Unix Timestamps: Working with Dates Across Time Zones
Learn what Unix timestamps are, how to convert them in various programming languages, handle timezone challenges, and understand the Y2K38 problem.
The timezone bug that cost me a weekend
Early in my career at Šikulovi s.r.o., I had a project where scheduled reports were arriving at the wrong time for international clients. Some got them at 3 AM. Others missed them entirely. The issue? I was storing formatted date strings instead of Unix timestamps. Different servers, different locales, absolute chaos.
A Unix timestamp is simply the number of seconds since January 1, 1970, at 00:00:00 UTC. That's it. One number. Same number in Prague, same number in Tokyo, same number in New York. That weekend debugging session taught me: when in doubt, use timestamps.
Why I default to timestamps now
After that incident, I switched to timestamps for everything time-related. Here's why they save so much trouble:
- Timezone independence: 1737500000 means the same moment everywhere on Earth
- Easy comparison: Is event A before event B? Just compare two numbers
- Compact storage: One integer vs a formatted string with timezone info
- No ambiguity: No more "is this MM/DD or DD/MM?" confusion
- Simple arithmetic: Need to add 24 hours? Add 86400. Done.
- Database friendly: Integers index and sort way faster than date strings
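The arithmetic and comparison points above can be sketched in a few lines; Python is used as the running example language here, but the same integer math works anywhere:

```python
import time

DAY = 86_400  # seconds in one day (Unix time ignores leap seconds)

now = int(time.time())   # current Unix timestamp, in seconds
tomorrow = now + DAY     # "add 24 hours" is plain integer arithmetic

# Comparing two moments is just comparing two integers:
assert tomorrow > now
assert tomorrow - now == DAY
```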
The seconds vs milliseconds trap
I've lost count of how many bugs I've seen from mixing up seconds and milliseconds. JavaScript uses milliseconds. Python uses seconds. Some APIs use one, some use the other. The trick? Count the digits.
- Seconds: 10 digits (e.g., 1737500000) - Python, PHP, most databases
- Milliseconds: 13 digits (e.g., 1737500000000) - JavaScript, Java
- JavaScript Date.now() returns milliseconds - always
- Python time.time() returns seconds as a float
- Database TIMESTAMP columns typically store seconds
- APIs vary - always check the docs, never assume
- Convert: ms / 1000 = seconds, seconds * 1000 = ms (simple but often forgotten)
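The digit-count trick above can be turned into a defensive helper. `normalize_to_seconds` is a hypothetical function, not a library API, and the heuristic only holds for present-day timestamps (10 digits for seconds, 13 for milliseconds):

```python
def normalize_to_seconds(ts: int) -> int:
    """Heuristic from the digit-count rule: values with 13+ digits
    are treated as milliseconds, 10-digit values as seconds."""
    if ts >= 1_000_000_000_000:  # 13 digits or more: assume milliseconds
        return ts // 1000
    return ts                    # assume seconds

assert normalize_to_seconds(1737500000000) == 1737500000  # ms -> s
assert normalize_to_seconds(1737500000) == 1737500000     # already seconds
```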
JavaScript: the milliseconds language
JavaScript decided to be different and use milliseconds everywhere. Once you accept that, conversions are straightforward. Here's my cheat sheet:
- Current timestamp: Date.now() - simple, fast, milliseconds
- Timestamp to Date: new Date(ms) or new Date(seconds * 1000)
- Date to timestamp: date.getTime() - always milliseconds
- Parse a string: new Date("2026-01-22").getTime() - note that date-only strings are parsed as UTC midnight
- UTC output: date.toISOString() - consistent, no timezone surprises
- Timezone offset: date.getTimezoneOffset() - minutes from UTC (warning: sign is inverted)
Python: seconds by default
Python's time.time() returns seconds as a float, which is actually nice because you get sub-second precision when you need it. The datetime module is powerful but has some gotchas:
- Current timestamp: time.time() returns float seconds
- Timestamp to datetime: datetime.fromtimestamp(ts) - but this uses LOCAL time!
- For UTC: datetime.fromtimestamp(ts, tz=timezone.utc) - utcfromtimestamp() is deprecated since Python 3.12 and returns a naive object anyway
- Datetime to timestamp: dt.timestamp() - Python 3.3+; naive datetimes are interpreted as local time
- Parse string: datetime.strptime(string, format).timestamp()
- Need milliseconds? int(time.time() * 1000)
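The cheat sheet above in runnable form, using a fixed timestamp from earlier in the article (the exact ISO string is just what that value decodes to in UTC):

```python
from datetime import datetime, timezone
import time

ts = 1737500000  # example timestamp from the seconds-vs-milliseconds section

# Timestamp -> aware UTC datetime (preferred over deprecated utcfromtimestamp)
dt_utc = datetime.fromtimestamp(ts, tz=timezone.utc)
assert dt_utc.isoformat() == "2025-01-21T22:53:20+00:00"

# Aware datetime -> timestamp round-trips exactly
assert dt_utc.timestamp() == ts

# Need milliseconds? Multiply before truncating.
ms = int(time.time() * 1000)
```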
Quick reference for other languages
Every language has its own way, but the pattern is always the same: get current time, convert from timestamp, convert to timestamp. Here's what I keep bookmarked:
- PHP: time() for seconds, strtotime() for parsing - simple and reliable
- Java: System.currentTimeMillis() (ms) or Instant.now().getEpochSecond()
- Go: time.Now().Unix() for seconds, time.Now().UnixMilli() for ms
- Ruby: Time.now.to_i for seconds, Time.at(timestamp) to convert back
- C#: DateTimeOffset.Now.ToUnixTimeSeconds() - verbose but clear
- Rust: SystemTime::now().duration_since(UNIX_EPOCH) - typical Rust verbosity
Timezones: where everything goes wrong
Remember that bug I mentioned at the start? Timezones. Unix timestamps are UTC by definition - the confusion happens when you convert to local time. My rule at Šikulovi s.r.o.: be paranoid about timezones.
- Store UTC, display local - always this direction, never backwards
- Never assume server timezone = user timezone (learned this the hard way)
- Use timezone-aware datetime objects - naive datetimes are time bombs
- API responses: ISO 8601 with offset or just send timestamps
- Test with users in multiple timezones before launch
- Daylight saving transitions will break things you thought were bulletproof
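A minimal sketch of the "store UTC, display local" rule, assuming Python 3.9+ for the stdlib zoneinfo module and an available tz database:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # stdlib since Python 3.9

ts = 1737500000  # stored value: one integer, no timezone baggage

# Convert once, at the display layer, per user:
utc = datetime.fromtimestamp(ts, tz=timezone.utc)
prague = utc.astimezone(ZoneInfo("Europe/Prague"))
tokyo = utc.astimezone(ZoneInfo("Asia/Tokyo"))

# Same instant, different wall-clock labels:
assert prague.timestamp() == tokyo.timestamp() == ts
```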
Timezone bugs I keep seeing
Code review at Šikulovi s.r.o. has shown me the same timezone mistakes over and over. If you're making one of these, you're not alone - but fix it:
- Using local time functions when UTC is needed - datetime.now() vs datetime.now(timezone.utc) (utcnow() is deprecated and returns a naive object)
- Ignoring DST transitions - "it worked yesterday" is not a good debugging strategy
- Assuming all days have 24 hours - DST days have 23 or 25, plan accordingly
- Storing local times without timezone info - future you will hate past you
- Comparing timestamps from different sources without normalizing
- Forgetting weird offsets - India is +05:30, Nepal is +05:45, yes really
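The "not all days have 24 hours" bug is easy to demonstrate. The 2025 US spring-forward date (March 9, America/New_York) is used here as an illustration; that local day is only 23 hours of real elapsed time:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

tz = ZoneInfo("America/New_York")

# "Spring forward" 2025: clocks jump from 02:00 to 03:00 on March 9.
start = datetime(2025, 3, 9, tzinfo=tz)   # local midnight before the jump
end = datetime(2025, 3, 10, tzinfo=tz)    # local midnight after

# The "day" is 23 hours long in real elapsed seconds, not 24:
assert end.timestamp() - start.timestamp() == 23 * 3600
```

Any code that adds 86400 to "go to the same time tomorrow" breaks twice a year in DST zones.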
Y2K38: the bug waiting to happen
If you use 32-bit signed integers for timestamps, your code has an expiration date: January 19, 2038, 03:14:07 UTC. That's when the number overflows. I've already found this in legacy code at Šikulovi s.r.o.
- 32-bit signed max: 2,147,483,647 seconds since 1970 = 2038-01-19
- After overflow: timestamps go negative, suddenly it's 1901
- Fix: Use 64-bit integers - they last billions of years
- Modern languages use 64-bit by default, but check your databases
- Embedded systems and old MySQL tables are the usual suspects
- Y2K all over again, but for a different reason
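The overflow can be simulated directly; ctypes is used here only to reproduce 32-bit wraparound behavior:

```python
import ctypes
from datetime import datetime, timezone

INT32_MAX = 2**31 - 1  # 2,147,483,647

# The last second a 32-bit signed time_t can represent:
last = datetime.fromtimestamp(INT32_MAX, tz=timezone.utc)
assert last.isoformat() == "2038-01-19T03:14:07+00:00"

# One tick later, a 32-bit counter wraps to a large negative number...
wrapped = ctypes.c_int32(INT32_MAX + 1).value
assert wrapped == -2**31

# ...which decodes as a date in 1901:
assert datetime.fromtimestamp(wrapped, tz=timezone.utc).year == 1901
```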
Negative timestamps: going before 1970
Fun fact: December 31, 1969, 23:59:59 UTC is timestamp -1. If you're working with historical data, you'll encounter negative timestamps. Not everything handles them well.
- One second before epoch: -1 (1969-12-31 23:59:59 UTC)
- Start of 1900: -2208988800
- JavaScript Date handles negatives - works fine for birthdates
- Some databases choke on negative values - MySQL TIMESTAMP does
- Use DATETIME instead of TIMESTAMP for historical dates in MySQL
- Always test with pre-1970 data if your app needs it
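The two values quoted above check out in Python, which handles negative timestamps fine when you stay timezone-aware:

```python
from datetime import datetime, timezone

# One second before the epoch:
before_epoch = datetime.fromtimestamp(-1, tz=timezone.utc)
assert before_epoch.isoformat() == "1969-12-31T23:59:59+00:00"

# Start of 1900, matching the value above:
assert datetime(1900, 1, 1, tzinfo=timezone.utc).timestamp() == -2208988800
```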
Precision: do you actually need nanoseconds?
For 99% of applications, millisecond precision is overkill. But if you're in high-frequency trading or scientific computing, here's what you need to know:
- Microseconds: PostgreSQL TIMESTAMP(6) - 6 decimal places
- Nanoseconds: Needed for HFT and scientific applications only
- Leap seconds: UTC occasionally adds a second to stay in step with Earth's rotation
- Unix time ignores leap seconds - every day is exactly 86400 seconds
- This means Unix time drifts slightly from atomic time
- For web apps? Doesn't matter. For satellite systems? Matters a lot.
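For what it's worth, Python exposes integer-nanosecond clocks (3.7+), though whether the OS actually delivers nanosecond resolution is a separate question:

```python
import time

ns = time.time_ns()  # integer nanoseconds since the epoch
s = time.time()      # float seconds since the epoch

# Both describe (nearly) the same moment at different precisions:
assert abs(ns / 1_000_000_000 - s) < 1.0
```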
My timestamp checklist
After years of debugging timezone bugs at Šikulovi s.r.o., these are the rules I follow religiously:
- Store in UTC, always - no exceptions
- 64-bit integers - Y2K38 is real, don't be the one to explain it to management
- Document seconds vs milliseconds in your API docs - future developers will thank you
- Show timezone in the UI - users shouldn't have to guess
- ISO 8601 for string formats - "2026-01-22T15:30:00Z" is unambiguous
- Validate ranges - if a timestamp is negative or year 9999, something is wrong
- Use date-fns or Luxon for complex operations - don't reinvent the wheel
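The range-validation item can be sketched as a small guard. `is_plausible` and its bounds are hypothetical - pick a window that makes sense for your own data:

```python
from datetime import datetime, timezone

# Hypothetical sanity window: nothing before 1970, nothing past 2100.
MIN_TS = 0
MAX_TS = int(datetime(2100, 1, 1, tzinfo=timezone.utc).timestamp())

def is_plausible(ts: int) -> bool:
    return MIN_TS <= ts <= MAX_TS

assert is_plausible(1737500000)
assert not is_plausible(-5)
assert not is_plausible(10**13)  # a milliseconds value sneaking in as seconds
```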
The bottom line
Timestamps aren't hard once you internalize one rule: always store UTC, convert only for display. Every timestamp bug I've debugged ultimately came down to violating this principle somewhere in the chain.
That weekend I spent fixing timezone bugs at Šikulovi s.r.o.? It taught me more about time handling than years of casual coding. Now I'm paranoid about timezones - and my reports arrive when they should.
FAQ
What is the Unix epoch?
The Unix epoch is January 1, 1970, at 00:00:00 UTC. Unix timestamps count the number of seconds (or milliseconds) since this moment. The epoch was chosen as a convenient starting point when Unix was being developed in the early 1970s.
How do I convert a Unix timestamp to a readable date?
In JavaScript: new Date(timestamp * 1000).toISOString() for seconds, or new Date(timestamp).toISOString() for milliseconds. In Python: datetime.fromtimestamp(timestamp, tz=timezone.utc) - pass the tz argument, or you get local time. Most programming languages have built-in functions for this conversion.
Why does JavaScript use milliseconds instead of seconds?
JavaScript was designed for web browsers where higher precision timing was useful for animations and user interactions. Milliseconds provide enough precision for most applications while remaining a simple integer. This decision was made early in JavaScript history and remains for backward compatibility.
Will my application break in 2038?
If your application uses 64-bit timestamps (most modern systems), you are safe. If you use 32-bit signed integers for timestamps, they will overflow on January 19, 2038. Check your database column types and language defaults. JavaScript, Python 3, and most modern languages use 64-bit by default.
How do I handle timestamps with timezones?
Always store and transmit timestamps in UTC. Convert to local time only for display. When receiving user input, convert it to UTC immediately. Use timezone-aware datetime libraries and always be explicit about which timezone you are working with.
What is the difference between Unix timestamp and ISO 8601?
A Unix timestamp is a single number representing seconds since 1970. ISO 8601 is a string format like "2026-01-22T15:30:00Z". Timestamps are more compact and easier to compare; ISO 8601 is human-readable and self-documenting. Use timestamps for storage and computation, ISO 8601 for APIs and display.
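The round trip between the two formats is one line in each direction (using the same example timestamp as earlier in the article):

```python
from datetime import datetime, timezone

ts = 1737500000

# Timestamp -> ISO 8601 string with an explicit UTC offset:
iso = datetime.fromtimestamp(ts, tz=timezone.utc).isoformat()
assert iso == "2025-01-21T22:53:20+00:00"

# ISO 8601 string -> timestamp:
assert datetime.fromisoformat(iso).timestamp() == ts
```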
Founder of CodeUtil. Web developer building tools I actually use. When I'm not coding, I experiment with productivity techniques (with mixed success).