Having read the data import help guide, I changed the file extension to .csv and removed the automatic formatting option and the header check box, but the import still failed. I am importing a text file over 1GB in size; I didn't consider it might fail, but it did at 50%. There are only a dozen columns.
Are you using 2.5 or 2.6? What exact version of Omniscope? 32-bit or 64-bit? What kind of PC do you have, how much installed memory, and which operating system?
Thank you for submitting this as a bug. From the data we have received it appears that you are using a fairly old version of Omniscope 2.5. Could you upgrade to the latest version of 2.5, or to Omniscope 2.6, which contains a lot of new features for improving these kinds of processes?
Unfortunately you are also running the 32-bit version of Omniscope. For a file of this size it is advisable to use 64-bit Omniscope and to have as much memory installed as possible.
Thanks. I'm afraid I am restricted to the version packaged by my company, and I have yet to work out how to get the newer version made available to me.
I have since determined that it is purely a memory issue. I am attempting to open a file for a new colleague who has no idea of its content. It appears to be 1.3m records with 220 columns (282m cells!?!). I'm not sure any standalone PC has the amount of memory required for such a file.
I am running an Intel Core 2 Duo CPU P8600 @ 2.4GHz with 2.96GB of RAM. How many cells should I be looking to reduce the file to in order to make it manageable? I can't cut down on rows, so I should be looking to reduce columns.
When you open a file you can ask it to skip the first x rows. I don't suppose you can specify the row numbers you want to include in the import? I could then merge multiple IOK files from selective imports. The smart money would manage this from the original data extract, but I rather suspect that Omniscope would actually be the easier solution if this were possible.
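Failing that, one workaround would be to do the selective import outside Omniscope entirely, with a small script that copies only a row range and a subset of columns into a smaller file before importing it. Here is a rough Python sketch; the file names, delimiter and column names are placeholders for illustration only, not anything from my actual extract:

import csv

def extract_subset(src_path, dst_path, start_row, end_row, keep_columns):
    """Copy only rows start_row..end_row (1-based, not counting the header)
    and only the named columns into a smaller file that imports easily."""
    with open(src_path, newline="", encoding="utf-8") as src, \
         open(dst_path, "w", newline="", encoding="utf-8") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=keep_columns, extrasaction="ignore")
        writer.writeheader()
        for i, row in enumerate(reader, start=1):
            if i > end_row:
                break                      # stop reading once the range is done
            if i >= start_row:
                writer.writerow(row)       # columns not in keep_columns are dropped

# Placeholder file names and column names, purely for illustration
extract_subset("export.txt", "chunk_1.csv",
               start_row=1, end_row=500_000,
               keep_columns=["customer_id", "date", "amount"])

Because it streams the file a row at a time, it should cope with a 1GB+ source without needing much memory itself.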
You must install the 64-bit version of Omniscope in order to work with this volume of data.
In Omniscope 2.6 we provide a tool that allows you to estimate the amount of memory required to open a specific file. This can be accessed by navigating to 'Settings>Advanced>Estimate memory use'.
The amount of memory taken up by the file will depend on the number of text/date/number fields, the number of records, the number of unique values in each text field and the average length of the values in each text field. To give you an example, a file with:
- 1.3m records
- 220 columns, comprised of 110 number fields and 110 text fields
- an average of 100,000 non-unique values in the text fields
- an average length of 50 characters in the text fields
will require 1.6GB of memory. A typical modern standalone PC would have more than enough. If you are working with this type of data on a regular basis we would recommend 8GB of RAM.
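To put the "282m cells" worry from your earlier post in perspective, a quick bit of arithmetic on the figures above (treating the 1.6GB estimate as covering the whole dataset, which is an illustrative assumption) shows the average cost per cell is modest:

records = 1_300_000
columns = 220
cells = records * columns           # 286,000,000 cells from the figures above (close to the 282m you quoted)
estimated_bytes = 1.6 * 1024 ** 3   # the 1.6GB estimate for this example file

print(f"{cells:,} cells")
print(f"~{estimated_bytes / cells:.0f} bytes per cell on average")

In other words, it is the per-column characteristics (text versus number, distinct values, value lengths) that drive the total, not the raw cell count alone.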
Could you provide us with a rough guide to the types of data contained inside each of the fields in your dataset?
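If it is easier, a quick script run over the file will give you most of those figures. Here is a rough Python sketch; the file name and delimiter are placeholders, and the distinct-value count is capped so the script itself stays light on memory:

import csv
from collections import defaultdict

def profile_columns(path, delimiter=",", max_uniques=100_000):
    """Stream the file once and report, per column, a distinct-value count
    (capped at max_uniques) and the average value length in characters."""
    distinct = defaultdict(set)
    total_len = defaultdict(int)
    rows = 0

    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f, delimiter=delimiter)
        for row in reader:
            rows += 1
            for col, value in row.items():
                value = value or ""
                total_len[col] += len(value)
                if len(distinct[col]) < max_uniques:
                    distinct[col].add(hash(value))   # store hashes, not the strings

    print(f"{rows:,} records, {len(total_len)} columns")
    for col in total_len:
        avg = total_len[col] / rows if rows else 0
        capped = "+" if len(distinct[col]) >= max_uniques else ""
        print(f"{col}: {len(distinct[col]):,}{capped} distinct values, "
              f"average length {avg:.1f} characters")

# Placeholder file name and delimiter, for illustration only
profile_columns("export.txt", delimiter="\t")

Those per-column counts and lengths are exactly the inputs the memory estimate above depends on, so they would also tell us which columns are worth dropping.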