How to Convert Between Binary, Decimal, Hexadecimal, and Octal
Learn why number bases keep showing up in everyday programming - Unix permissions, hex colors, bit flags, memory addresses - and how to convert between them quickly.
I avoided binary and hex for years - until a production server went down because I set the wrong permissions. That night I finally learned what chmod 755 actually means. Now I convert between bases constantly at Šikulovi s.r.o.
For years, I just memorized chmod 755 and chmod 644 without understanding why. Then one day a senior developer asked me to set specific permissions, and I had to actually think about what the numbers meant. That was when I finally learned that 755 is really 111-101-101 in binary.
Each octal digit is the sum of read (4), write (2), and execute (1) bits. Add them up and you get 7 for full access, 5 for read-and-execute, 4 for read-only. Once I understood the binary representation, Unix permissions became intuitive instead of magical incantations.
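Here's that breakdown as a short JavaScript sketch. explainPermissions is just a name I invented for the example, not a real API:

```javascript
// Hypothetical helper: expand an octal permission string like "755"
// into each digit's binary form and rwx flags.
function explainPermissions(octal) {
  return [...octal].map((digit) => {
    const n = parseInt(digit, 8);
    const rwx =
      ((n & 4) ? 'r' : '-') +   // read bit
      ((n & 2) ? 'w' : '-') +   // write bit
      ((n & 1) ? 'x' : '-');    // execute bit
    return `${digit} = ${n.toString(2).padStart(3, '0')} = ${rwx}`;
  });
}

console.log(explainPermissions('755'));
// [ '7 = 111 = rwx', '5 = 101 = r-x', '5 = 101 = r-x' ]
```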
Modern programming abstracts away most low-level details, but number bases keep showing up. Colors in CSS and design tools use hexadecimal. Permissions and flags use binary. Memory addresses in debuggers are hex. Network protocols often display data in hex or binary.
I used to pick colors with a color picker and never think about what #3498db actually meant. Then I needed to programmatically darken a color by 20%, and I realized I had no idea how hex colors worked.
Each pair of hex digits represents one color channel from 0 to 255. #3498db breaks down to red=52 (0x34), green=152 (0x98), blue=219 (0xdb). To darken by 20%, multiply each by 0.8. Now I can do basic color math in my head - #000000 is black, #FFFFFF is white, #FF0000 is pure red.
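That color math is easy to script, too. A minimal sketch - darken is a hypothetical helper, not a library function:

```javascript
// Hypothetical sketch: darken a hex color by scaling each channel.
function darken(hex, factor) {
  const channels = hex
    .replace('#', '')
    .match(/../g)                                    // split into RR, GG, BB pairs
    .map((pair) => parseInt(pair, 16))               // hex pair -> 0-255
    .map((n) => Math.min(255, Math.round(n * factor)))
    .map((n) => n.toString(16).padStart(2, '0'));    // back to two hex digits
  return '#' + channels.join('');
}

console.log(darken('#3498db', 0.8)); // "#2a7aaf"
```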
Bitwise operations seemed like interview trivia until I worked on a feature flags system. We stored 32 boolean features in a single integer. Each bit represented one feature. Checking if feature 5 was enabled meant checking if bit 5 was set.
The pattern is simple once you see it. To check a bit: (flags & (1 << n)) !== 0. To set a bit: flags | (1 << n). To clear a bit: flags & ~(1 << n). Understanding binary makes these operations obvious instead of cryptic.
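Put together, a feature-flag roundtrip looks something like this in JavaScript (FEATURE_DARK_MODE is an invented constant for the example, not from our actual system):

```javascript
const FEATURE_DARK_MODE = 5; // hypothetical feature living at bit 5

let flags = 0;

flags |= (1 << FEATURE_DARK_MODE);                      // set bit 5
console.log((flags & (1 << FEATURE_DARK_MODE)) !== 0);  // true - enabled

flags &= ~(1 << FEATURE_DARK_MODE);                     // clear bit 5
console.log((flags & (1 << FEATURE_DARK_MODE)) !== 0);  // false - disabled
```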
When a crash log shows an address like 0x00007fff5fbff8c0, it looks intimidating. But hex is just a compact way to write binary. Each hex digit represents exactly 4 bits, so a 64-bit address is exactly 16 hex digits.
I have learned to recognize patterns in memory addresses. Stack addresses on my Mac start with 0x7fff. Heap addresses look different. NULL pointer dereferences show up as access to addresses near 0x0000000000000000. These patterns help me quickly categorize crashes before diving into detailed analysis.
You do not need to convert large numbers by hand, but knowing small conversions helps. I have memorized hex 0-F (0-15 in decimal) and common binary patterns like 1111=15, 1000=8, 0111=7.
For larger numbers, I chunk them. To convert 0xA5 to decimal: A=10, 5=5, so (10*16)+5 = 165. To convert 200 to hex: 200/16=12 remainder 8, so 0xC8. These quick estimates help me sanity-check results from converters.
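When I want the machine to double-check that chunking, JavaScript's built-in parseInt and toString do the same conversions:

```javascript
console.log(parseInt('A5', 16));  // 165 -> (10 * 16) + 5
console.log((200).toString(16));  // "c8" -> 200 / 16 = 12 remainder 8
console.log(parseInt('1111', 2)); // 15
console.log((8).toString(2));     // "1000"
```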
The converter handles all the common bases - binary (2), octal (8), decimal (10), and hexadecimal (16). I use it constantly when debugging or working with low-level code. Type in any base and instantly see the value in all others.
I added support for common programming prefixes too. You can enter 0xFF, 0b1010, or 0o755 and it recognizes the format automatically. The bit visualization shows which bits are set, which is incredibly helpful for understanding bitwise operations.
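Conveniently, JavaScript's Number() understands the same three prefixes, so you can sanity-check the converter from any browser console:

```javascript
console.log(Number('0xFF'));   // 255 - hexadecimal
console.log(Number('0b1010')); // 10  - binary
console.log(Number('0o755')); // 493 - octal
```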
Why use hexadecimal instead of binary?
Hex is more compact. Binary 11111111 is hex FF - much easier to read and type. Each hex digit represents exactly 4 binary digits, so conversion is trivial. It's the sweet spot between human readability and binary representation.
Where is octal still used?
Mostly Unix file permissions. The chmod command uses octal because each digit (0-7) maps exactly to 3 bits (read, write, execute). You'll also see octal in some older systems and in C escape sequences like \077.
How are negative numbers represented?
Negative numbers use two's complement representation on most systems, and the converter shows this for signed integers. The highest bit acts as the sign bit, and you negate a number by inverting all of its bits and adding one.
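One way to see the pattern is to mask a negative number down to 8 bits in JavaScript - a rough sketch of the idea, not how the converter works internally:

```javascript
// Expose the two's complement bit pattern of -5 in 8 bits.
const value = -5;
const bits = 8;
const mask = (1 << bits) - 1;  // 0b11111111
console.log((value & mask).toString(2).padStart(bits, '0'));
// "11111011" - invert 00000101 (5), then add 1
```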
Why do JavaScript bitwise operations give strange results?
JavaScript coerces numbers to 32-bit signed integers for bitwise operations. If you're working with larger numbers or expecting unsigned results, you can get surprising values. Use >>> 0 to force an unsigned interpretation.
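A few console experiments that show the coercion in action:

```javascript
console.log(0xFFFFFFFF | 0);          // -1: coerced to signed 32-bit
console.log((0xFFFFFFFF | 0) >>> 0);  // 4294967295: >>> 0 forces unsigned

console.log(1 << 31);                 // -2147483648: shifted into the sign bit
console.log((1 << 31) >>> 0);         // 2147483648
```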
What base should I store numbers in?
Store in whatever your system expects - usually native binary integers. Display in whatever makes sense for humans: hex for colors and addresses, decimal for quantities, binary for bit flags. The base is just a representation.
What about floating-point numbers?
That's a whole other topic. IEEE 754 floating-point uses a sign bit, an exponent, and a mantissa, which is why 0.1 + 0.2 !== 0.3 in JavaScript. For most development, just know that floats are approximations and use integers for money.
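You can see both the problem and the integer workaround in a few lines:

```javascript
console.log(0.1 + 0.2);          // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3);  // false

// For money, keep integer cents and only divide for display:
const totalCents = 10 + 20;      // exact integer arithmetic
console.log(totalCents / 100);   // 0.3
```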
Founder of CodeUtil. Web developer building tools I actually use. When I'm not coding, I experiment with productivity techniques (with mixed success).