Managing Omniscope Scaling & Performance

File sizes & typical RAM requirements

How much data can Omniscope handle? Omniscope has no fixed upper limit on the number of rows and columns that can be managed in a single file. The effective limit depends on the specification of the machine running the file, and on a complex relationship between processor speed, available Windows and Java memory addressing, data types, the density of the columns, and the number and type of Omniscope views employed. Effective data volume is principally a function of the number of cells (rows x columns), while the effective capacity of a computer (for this purpose) is principally a function of the address space and amount of RAM in the machine.

The best way to determine how large data sets (over 5 million cells) or very large data sets (over 15 million cells) will perform on machines with different amounts of RAM is to first try them with 32-bit Omniscope Professional. If this does not handle your data sets properly, try a machine with a 64-bit operating system and Java and at least 4 GB of RAM (most database servers now meet this minimum specification). If the test on the database server shows that 4 GB of RAM and 64-bit is sufficient, you will need to upgrade your desktop to that specification, or run Omniscope from an account on the server machine.

In general, a recent 32-bit computer with:

  • 512 MB of RAM should handle files of about 5 million typical cells
  • 1.0 GB of RAM should handle files of about 15-17 million typical cells
  • 2 GB of RAM should handle about 20 million typical cells
  • Over 2 GB of RAM cannot be used by 32-bit systems, only by 64-bit systems

The less-than-proportional increase between 1.0 and 2.0 GB of RAM occurs because the 32-bit Windows/Java addressing limit is reached at about 1.2 GB, well before a 32-bit computer can utilise its full 2.0 GB. Computers running 64-bit operating systems with 2.0 GB or more of RAM have much higher limits. We have documented files of 8 million rows and 15 columns (120 million cells) running on 64-bit servers with 8 GB of RAM, and some prototyping installations are currently using up to 16 GB of RAM.
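As a back-of-envelope check, the guideline figures above can be expressed as a simple sizing helper. This is an illustrative sketch only: the tiers are taken from the rough 32-bit guidance in this article, and actual capacity also depends on data types, column density and the views used.

```python
# Rough sizing sketch based on the guideline figures in this article.
# The tiers are illustrative rules of thumb, not hard product limits.

def cell_count(rows: int, columns: int) -> int:
    """Effective data volume is principally rows x columns."""
    return rows * columns

def suggested_platform(cells: int) -> str:
    """Map a cell count onto the rough RAM tiers described above."""
    if cells <= 5_000_000:
        return "32-bit, 512 MB RAM"
    if cells <= 17_000_000:
        return "32-bit, 1 GB RAM"
    if cells <= 20_000_000:
        return "32-bit, 2 GB RAM"
    return "64-bit OS/Java, 4 GB+ RAM"

# The documented 8 million rows x 15 columns example:
cells = cell_count(8_000_000, 15)   # 120,000,000 cells
print(suggested_platform(cells))    # well into 64-bit territory
```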

Running on 64-bit Machines

The 64-bit version of Omniscope is included in every install. Due to delays in the release of some Java libraries, some features are not yet available in 64-bit. The largest Omniscope file we know of currently in use has 34 million rows and runs on a mail-order US$2,000 desktop machine with 64-bit Windows/Java and 16 GB of RAM. If you are running Omniscope on a 64-bit machine, there are various options you can set to manage memory availability and performance. More info.

Hiding or deleting fields/columns 

If you are dealing with very large data sets, or plan to distribute Omniscope report files to 32-bit desktop machines with 256 MB or less of RAM, you can minimise Omniscope's peak memory use by hiding or deleting all unused columns (use Data > Manage Fields > Hide Field). A data field (column) is not loaded into memory until it is actually displayed, so views that display many fields, such as the Chart View and the Table View, should display only the most useful fields. Fields rarely needed for filtering should appear only on rear tabs, opened only by users who really need to view and filter by the values in those fields. Users who never open these tabs will have lower peak RAM requirements and better performance.

Omniscope includes tools to help analyse memory use and optimise very large files for a range of recipient machines. These tools are documented here.

Hiding fields/columns in menus/devices

By default, drop-down menus in Omniscope offer all fields as options for the user to display, or to add to the filter devices on the Sidebar. Often, only a few of the fields make sense as options in a given view, or as filters on a given tab. Hiding as many fields as possible from drop-down options and Sidebar devices will generally improve tab-to-tab performance, especially for files with large column sets (over 200 columns). More info.

Views employed

The use of certain views limits scalability and performance at the upper reaches of file size. Performance in some views is also sensitive to the number of unique category values used in the view.

Pivot View: currently does not handle very large numbers of categories well, especially on slower processors. If you encounter slow performance, try reducing the number of unique categories used in the Pivot View by setting wider category limits ('buckets'). Future versions will perform better with more unique categories. For more information, see Pivot View Data Truncation.

Memory allocation management 

If you are pushing the boundaries of Omniscope's data capacity on a given machine, fine-tuning the memory allocation may help. You can override the default 75% share of memory allocated to Omniscope with a higher limit that you specify.

Omniscope Online

Omniscope Online, the zero-install Web Start version of the free Omniscope Viewer, does not currently adjust its configuration dynamically according to your PC's memory. Instead, a default memory ceiling is used, which limits the data capacity of the Viewer. If you encounter out-of-memory errors using Omniscope Online, use the customise link on the download page and enter a suitable value in the "Memory limit" box: approximately 75% of the installed memory of your target audience's machines. You can then generate a parameterised download/deploy link to share with your audience that includes this configuration.
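The 75% guideline above translates into a simple calculation. This sketch is purely illustrative (the machine sizes are examples, not recommendations):

```python
# Illustrative calculation of a "Memory limit" value for Omniscope Online:
# approximately 75% of the installed RAM on the target audience's machines.

def memory_limit_mb(installed_ram_mb: int, share: float = 0.75) -> int:
    """Return a suggested memory ceiling in MB (75% of installed RAM)."""
    return int(installed_ram_mb * share)

print(memory_limit_mb(1024))  # 1 GB machine -> 768 MB
print(memory_limit_mb(2048))  # 2 GB machine -> 1536 MB
```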