Unix Timestamps: From Epoch to Modern
Written by Sofia Thornwood
February 9, 2026 · 8 min read
Every file you save, every database record you create, and every API request you send carries a timestamp. Behind the human-readable dates lies a simple integer — the number of seconds since January 1, 1970. This single number, the Unix timestamp, is the backbone of how computers track time across operating systems, programming languages, and databases worldwide.
What is a Unix timestamp?
A Unix timestamp (also called Epoch time or POSIX time) is the number of seconds that have elapsed since January 1, 1970, at 00:00:00 UTC. This moment is known as the Unix Epoch.
Key characteristics:
• It is a single integer — no time zones, no formatting, no ambiguity
• It increases by exactly 1 for every second that passes
• Negative values represent dates before 1970 (e.g., -86400 = December 31, 1969)
• The value is always in UTC, which eliminates timezone confusion when storing or transmitting time
For example:
• 0 = January 1, 1970, 00:00:00 UTC
• 1000000000 = September 9, 2001, 01:46:40 UTC
• 1700000000 = November 14, 2023, 22:13:20 UTC
This simplicity is precisely why Unix timestamps became the universal standard for representing time in computing.
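If you want to verify these values yourself, a few lines of standard-library Python (shown here as a quick sketch) turn a raw second count into a UTC date:

    from datetime import datetime, timezone

    # Convert a Unix timestamp (seconds since the Epoch) into an aware UTC datetime
    for ts in (0, 1_000_000_000, 1_700_000_000):
        print(ts, "->", datetime.fromtimestamp(ts, tz=timezone.utc))

    # 0 -> 1970-01-01 00:00:00+00:00
    # 1000000000 -> 2001-09-09 01:46:40+00:00
    # 1700000000 -> 2023-11-14 22:13:20+00:00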
Why 1970? The origin of Epoch
The choice of January 1, 1970, as the Epoch was practical, not symbolic. When Ken Thompson and Dennis Ritchie developed Unix at Bell Labs in the late 1960s, they needed a reference point for their time system.
The original Unix time used a 32-bit unsigned integer counting in sixtieths of a second, but this overflowed too quickly. The engineers switched to a 32-bit signed integer counting whole seconds, which gave a range of about 136 years, roughly 68 years on either side of the epoch. They chose January 1, 1970, because it was a recent, round date that balanced past and future coverage.
Other systems chose different epochs:
• Windows FILETIME: January 1, 1601 (start of a 400-year Gregorian calendar cycle)
• macOS Classic: January 1, 1904
• GPS time: January 6, 1980
• Java and JavaScript: Unix Epoch, but in milliseconds instead of seconds
Despite the variety, Unix Epoch became the de facto standard because of the explosive growth of Unix-based systems and the internet.
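As a small illustration of how two of these epochs relate, the sketch below (Python, using the well-known 11,644,473,600-second gap between 1601 and 1970) converts a Windows FILETIME tick count into a Unix timestamp:

    # Windows FILETIME counts 100-nanosecond ticks since 1601-01-01 00:00:00 UTC.
    # The two epochs are 11,644,473,600 seconds apart.
    EPOCH_DIFFERENCE_SECONDS = 11_644_473_600

    def filetime_to_unix(filetime: int) -> int:
        """Convert a Windows FILETIME tick count to Unix seconds."""
        return filetime // 10_000_000 - EPOCH_DIFFERENCE_SECONDS

    # 133_500_000_000_000_000 ticks falls in January 2024
    print(filetime_to_unix(133_500_000_000_000_000))  # -> 1705526400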
Common timestamp formats
While the core concept is simple, timestamps appear in several formats across different systems and languages.
Seconds (Unix standard):
• 10 digits for current dates (e.g., 1709913600)
• Used by: Unix/Linux, Python time.time(), PHP time(), Ruby Time.now.to_i
• Range: covers dates from 1901 to 2038 in 32-bit signed format
Milliseconds:
• 13 digits for current dates (e.g., 1709913600000)
• Used by: JavaScript Date.now(), Java System.currentTimeMillis(), many REST APIs
• Common in browser-based applications and NoSQL databases
Microseconds:
• 16 digits (e.g., 1709913600000000)
• Used by: PostgreSQL's native timestamp types, Python datetime objects (microsecond resolution, exposed by timestamp() as a fractional-second float)
Nanoseconds:
• 19 digits (e.g., 1709913600000000000)
• Used by: Go time.Now().UnixNano(), high-frequency trading systems
ISO 8601:
• Human-readable format: 2024-03-08T16:00:00Z
• The Z suffix means UTC; offsets look like +05:30 or -04:00
• Preferred for APIs, logs, and data interchange because it is unambiguous and sortable
Quick identification tip: count the digits. 10 = seconds, 13 = milliseconds, 16 = microseconds, 19 = nanoseconds.
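That digit-count rule can be turned into a rough normalization helper. The following Python sketch is a heuristic, not production code, and assumes values fall in the present-day ranges listed above:

    def normalize_to_seconds(ts: int) -> int:
        """Guess the precision of a raw timestamp by digit count and return seconds."""
        digits = len(str(abs(ts)))
        if digits >= 19:        # nanoseconds
            return ts // 1_000_000_000
        if digits >= 16:        # microseconds
            return ts // 1_000_000
        if digits >= 13:        # milliseconds
            return ts // 1_000
        return ts               # already seconds

    print(normalize_to_seconds(1_709_913_600_000))  # -> 1709913600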
The Y2038 problem
The Year 2038 problem is the Unix equivalent of the Y2K bug. On January 19, 2038, at 03:14:07 UTC, a 32-bit signed integer storing seconds since Epoch will overflow — wrapping around to a negative number that represents December 13, 1901.
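You can reproduce the boundary with a couple of lines of Python: the largest value a signed 32-bit counter can hold is 2^31 - 1, and converting it lands exactly on the date above:

    from datetime import datetime, timezone

    INT32_MAX = 2**31 - 1   # 2147483647, the last second a 32-bit signed time_t can represent

    print(datetime.fromtimestamp(INT32_MAX, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00

    # One second later, a 32-bit counter wraps to -2**31
    # (negative timestamps may not be accepted on every platform, e.g. Windows):
    print(datetime.fromtimestamp(-2**31, tz=timezone.utc))
    # 1901-12-13 20:45:52+00:00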
Why it matters:
• Embedded systems (routers, IoT devices, industrial controllers) often use 32-bit timestamps
• Legacy databases and file formats may store time as 32-bit integers
• Some programming languages default to 32-bit time functions on older platforms
• Financial systems calculating dates 10+ years in the future are already affected
The fix:
• Modern 64-bit systems use 64-bit time_t, which extends the range to approximately 292 billion years — effectively forever
• Linux kernel moved to 64-bit time internally starting with version 5.6 (2020)
• Most modern languages (Python 3, Java, Go, Rust) use 64-bit timestamps by default
What you should do:
• Audit legacy code that stores timestamps as 32-bit integers
• Update embedded systems firmware when patches become available
• Test your applications with dates beyond January 19, 2038
• Use 64-bit storage for any new timestamp fields in databases
Use cases in development
Unix timestamps appear throughout the software stack, often in places developers don't immediately think about.
Database records:
• created_at and updated_at columns are frequently stored as integers for indexing speed
• Comparing timestamps is a simple integer comparison, faster than parsing date strings
• Partitioning tables by time ranges works naturally with integer boundaries
API design:
• JWT tokens use iat (issued at) and exp (expiration) fields as Unix timestamps in seconds
• Rate limiting often tracks request counts in per-second or per-minute windows using timestamps
• Webhook signatures sometimes include a timestamp to prevent replay attacks (reject if older than 5 minutes)
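The replay-protection check in the last bullet is just integer arithmetic. A minimal Python sketch (the names are illustrative, not any particular provider's API):

    import time

    REPLAY_WINDOW_SECONDS = 300   # reject webhook deliveries older than 5 minutes

    def is_fresh(webhook_timestamp: int) -> bool:
        """Return True if the webhook's Unix timestamp is within the replay window."""
        return abs(int(time.time()) - webhook_timestamp) <= REPLAY_WINDOW_SECONDS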
Caching:
• HTTP Cache-Control max-age is specified in seconds
• CDN cache keys often include a timestamp parameter to force cache invalidation
• Browser localStorage entries can store an expiry timestamp for manual TTL management
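Whatever the storage layer, the manual-TTL pattern in the last bullet is the same comparison everywhere. A minimal Python sketch, with a plain dict standing in for localStorage or any other key-value store:

    import time

    # Each entry stores a value plus an absolute expiry timestamp.
    cache: dict[str, tuple[object, float]] = {}

    def put(key: str, value: object, ttl_seconds: int) -> None:
        cache[key] = (value, time.time() + ttl_seconds)

    def get(key: str):
        entry = cache.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.time() >= expires_at:   # expired: evict and report a miss
            del cache[key]
            return None
        return value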
File systems:
• File modification times (mtime), access times (atime), and change times (ctime) are stored as Unix timestamps
• Build tools use file timestamps to determine which files need recompilation
• Version control systems record commit timestamps to establish chronological order
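Reading the mtime mentioned above takes a single standard-library call, and the staleness check a build tool performs is one comparison. A Python sketch:

    import os

    def needs_rebuild(source: str, output: str) -> bool:
        """Rebuild if the output is missing or older than the source."""
        if not os.path.exists(output):
            return True
        # st_mtime is a Unix timestamp (a float, with sub-second precision on most filesystems)
        return os.stat(source).st_mtime > os.stat(output).st_mtime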
Logging and monitoring:
• Log aggregation systems index entries by timestamp for efficient time-range queries
• Metrics databases use high-precision timestamps (InfluxDB defaults to nanoseconds; Prometheus uses milliseconds)
• Distributed tracing correlates events across services using synchronized timestamps
Converting between formats
Converting timestamps is one of the most common developer tasks, whether debugging an API response or writing a migration script.
Command line (Linux/macOS):
• Timestamp to date: date -d @1709913600 (GNU) or date -r 1709913600 (BSD)
• Date to timestamp: date -d '2024-03-08 16:00:00 UTC' +%s
JavaScript:
• Timestamp to Date: new Date(1709913600 * 1000) — multiply by 1000 because JS uses milliseconds
• Date to timestamp: Math.floor(Date.now() / 1000) — divide by 1000 to get seconds
• ISO string: new Date(1709913600000).toISOString()
Python:
• Timestamp to datetime: datetime.fromtimestamp(1709913600, tz=timezone.utc)
• Datetime to timestamp: datetime.now(timezone.utc).timestamp()
• Always specify UTC to avoid local timezone surprises
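Putting the Python pieces together, here is a self-contained sketch of the full round trip, kept in UTC throughout:

    from datetime import datetime, timezone

    ts = 1_709_913_600

    # Timestamp -> aware datetime -> ISO 8601 string
    dt = datetime.fromtimestamp(ts, tz=timezone.utc)
    print(dt.isoformat())                  # 2024-03-08T16:00:00+00:00

    # ISO 8601 string -> datetime -> timestamp
    parsed = datetime.fromisoformat("2024-03-08T16:00:00+00:00")
    print(int(parsed.timestamp()))         # 1709913600

    # Current time as a Unix timestamp in seconds
    print(int(datetime.now(timezone.utc).timestamp()))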
Common pitfalls:
• Forgetting the seconds-vs-milliseconds conversion (off by a factor of 1000)
• Applying local timezone offset when UTC was intended
• Using 32-bit integer types for storage when 64-bit is available
• Comparing timestamps from systems using different precisions without normalizing
For quick conversions without writing code, an online timestamp converter tool lets you paste any format and instantly see the result in all other formats.
Best practices for timestamp handling
Following a few consistent rules prevents the most common timestamp-related bugs.
Storage:
• Always store timestamps in UTC — convert to local time only at the display layer
• Use 64-bit integers (BIGINT in SQL) for Unix timestamps to avoid the Y2038 problem
• For databases that support native datetime types (TIMESTAMP WITH TIME ZONE in PostgreSQL), prefer those over raw integers for better query ergonomics
Transmission:
• Use ISO 8601 format with explicit timezone for human-readable APIs: 2024-03-08T16:00:00Z
• Use Unix timestamps in seconds for machine-to-machine communication where compactness matters
• Always document whether your API returns seconds or milliseconds
Comparison:
• Normalize precision before comparing (don't compare seconds to milliseconds)
• Account for clock skew in distributed systems — allow a small tolerance window
• Use monotonic clocks for measuring elapsed time within a process (wall clock can jump during NTP sync)
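The monotonic-clock advice looks like this in practice (Python shown as an example): time.monotonic() cannot jump backwards, so it is the right tool for durations, while time.time() remains the right tool for wall-clock timestamps.

    import time

    start = time.monotonic()          # immune to NTP corrections and manual clock changes
    time.sleep(0.1)                   # stand-in for the operation being timed
    elapsed = time.monotonic() - start
    print(f"took {elapsed:.3f} s")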
Testing:
• Inject time as a dependency so tests can use fixed timestamps
• Test with timestamps near boundary values: 0, negative, Y2038 overflow, year 9999
• Verify behavior across daylight saving time transitions
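The time-injection idea from the first testing bullet can be as simple as passing the clock in as a function, so production code uses the real clock and tests pin it to a fixed instant. The names below are illustrative:

    import time
    from typing import Callable

    def make_token_expiry(ttl_seconds: int,
                          now: Callable[[], int] = lambda: int(time.time())) -> int:
        """Compute an expiry timestamp; `now` is injectable so tests can freeze time."""
        return now() + ttl_seconds

    # Production: real clock
    expiry = make_token_expiry(3600)

    # Test: fixed timestamp, fully deterministic
    assert make_token_expiry(3600, now=lambda: 1_700_000_000) == 1_700_003_600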
Conclusion
Unix timestamps are deceptively simple — a single number that powers the entire temporal infrastructure of modern computing. Understanding their formats, limitations, and conversion patterns saves hours of debugging and prevents subtle data corruption. Whether you're parsing API responses, debugging JWT tokens, or designing a database schema, a solid grasp of timestamp fundamentals is essential. Try our timestamp converter tool to quickly convert between Unix timestamps, ISO 8601, and human-readable dates in any timezone.
Frequently Asked Questions
Are Unix timestamp, Epoch time, and POSIX time the same thing?
They are the same thing. Unix timestamp, Epoch time, and POSIX time all refer to the number of seconds elapsed since January 1, 1970, at 00:00:00 UTC. The terms are used interchangeably across documentation and programming communities.