
The Time Command

The time command is used to measure how long a command takes to run.

How To Use The Time Command

Time syntax:

time <command> <command switches>

For example, to test the performance of a script or a command, run:

time sh ./your_script.sh

The results from the time command will be as follows:

real    0m41.926s
user    0m0.640s
sys     0m1.920s

It shows:

  real: the actual elapsed (wall-clock) time from start to finish
  user: CPU time used by the process itself in user mode
  sys:  CPU time used by the kernel on behalf of the process

Options: -p prints the timing summary in the portable POSIX format
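
As an illustration, with bash's built-in time keyword the -p option produces the POSIX-style summary below; the numbers are only an example and will vary from run to run:

time -p sh ./your_script.sh

real 41.92
user 0.64
sys 1.92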

If you want to time several commands run in sequence, group them as follows:

time { cmd1; cmd2; }
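
This brace-group form works with bash's time keyword; note that the last command must be terminated with a semicolon before the closing brace. A minimal sketch, using sleep only as a stand-in for real commands:

time { sleep 1; sleep 2; }

The reported real time should be roughly the sum of the two commands, about 3 seconds.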

For more: (source: http://stackoverflow.com and stackexchange.com)

Why is the run time different each time?

Benchmarking is a difficult science:

  1. The CPU keeps often-requested data in caches. Since accessing data in a cache is faster than accessing main memory, this can skew benchmarks.
  2. The OS may cache files in memory rather than loading them from disk, so subsequent runs should be faster (see the example after this list).
  3. Time-sharing operating system kernels like Linux, Mach or Windows NT juggle multiple processes at the same time. This means that other processes may be executed in between the execution phases of your benchmarked program, so measuring the elapsed time (real time) is inaccurate: it likely also includes execution time from other processes. You should measure the used CPU time instead.
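
A simple way to observe the file-cache effect (point 2) is to run the same disk-heavy command twice in a row; the directory and search pattern below are only placeholders:

time grep -r "some pattern" /usr/share/doc > /dev/null
time grep -r "some pattern" /usr/share/doc > /dev/null

On the first run the files are read from disk; on the second run they are usually already in the OS page cache, so the real time typically drops noticeably even though the work done by the command is the same.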

Which of the results from the time command is meaningful?

User+Sys will tell you how much actual CPU time your process used. Note that this is across all CPUs, so if the process has multiple threads (and it is running on a computer with more than one processor) it can exceed the wall-clock time reported by Real, which often happens for CPU-bound multi-threaded programs. Note that in the output these figures include the User and Sys time of all child processes (and their descendants)...
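
To see User+Sys exceed Real yourself, time a multi-threaded workload on a multi-core machine, for example a parallel compressor such as xz with four threads (the file name here is just a placeholder):

time xz -T4 -k some_large_file

Because the CPU time of all threads is added up, the user figure can be several times larger than real.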