I'm exploring the 'parameters' tab in Data Manager and I'm not sure if "I don't get it" or "it doesn't work yet"... or quite possibly both! :)
As I'm looking at it in DM 2.6 b379, I have 3 types of parameter I can configure:
1. A fixed value, of a type of my choosing... this seems limited as a 'parameter', but I guess it does give me the ability to change settings manually inside DM and have different behaviours (useful for dev/test/live environments etc.)
2. A value from my existing Omniscope model - I can see how this works (and can get it to work) but it seems to be of limited value at present... I guess I will think of uses in due course (so I would be interested in how others are using this)
3. A value derived from another Omniscope - this seems VERY useful, in that I can use one Omniscope DM to drive another, and another... it offers LOTS of user/group/division/dept/category-type customisation... BUT I can't select another Omniscope from the drop-down list presented.
In the third option I had expected to see either a file browser (to find other IOK files to use as parameter value sources), or some form of Task Manager, so I can pick another Omniscope instance(?!) - but nothing appears.
Any explanations on how parameters do/will work - with examples of how they can be utilised - would be VERY welcome... it seems to me it could be incredibly powerful, but I'm struggling to make the most of it right now...
This allows you to, for example, parameterise a database query (using custom SQL, or the filters tab of the database block), then easily change it from one place. You can also use the scheduler to change the parameter value.
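As a rough sketch of what that looks like, imagine referencing a parameter from the custom SQL of a database block. Note that the element names and the placeholder token below are illustrative guesses for this post, not the documented Omniscope format - check the parameters help for the exact syntax:

  <!-- Illustrative sketch only: the element names and the @...@ placeholder
       syntax are assumptions, not the documented Omniscope format. -->
  <database-block name="SalesQuery">
    <!-- The DM parameter 'RegionParam' would be substituted into the query
         text each time the block executes, so changing the parameter in one
         place re-drives the whole downstream model. -->
    <custom-sql>
      SELECT * FROM SalesTransactions WHERE Region = '@RegionParam@'
    </custom-sql>
  </database-block>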
This was also developed to support databases larger than Omniscope can hold in memory at one time. (Incidentally, the memory required to work on a file has recently dropped by up to 50% - you won't see this in Task Manager, but you will find that with the same memory cap you can open more data - feedback welcome.) Say you have a table of summary data, such as stocks or products, and a massive timeseries table, such as stock prices or product sales transactions.
You might open the summary data in one Omniscope, and a filtered database query to the timeseries data in the other. You configure the parameters so that when the first Omniscope selects/filters to one or more stocks/products, the second Omniscope re-executes the database query to fetch the required timeseries/transactions. You configure auto-refresh in the second file appropriately.
You can also do this in the same Omniscope, using frozen filters or queries to isolate the two tables of data, but with this model you can only filter to see the data.
The external app drop-down should show you a list of *other* Omniscope windows open on your PC (if any), plus any others in your local area network (if any). By default you cannot see any details of the other Omniscopes unless those users choose to publish the name of the file they have open, for example via the Settings > Advanced > Network presence options.
Ideas for how to explain this or make this more intuitive are certainly welcome.
I have now had a play, and am coming to the conclusion that - clever as this all is - at this stage it doesn't suit the purpose I had hoped. I will try to explain the scenario:
1. We have a large database (tens of millions of rows) - far too big to load into Omniscope in one go (meaning that we cannot use Batch Output to 'chop' up the data for each user/group - each of which needs to see a different 'slice')
2. We would like to be able to iterate through a list of parameters and 'call' a DM 'build process' each time, passing one or more parameters (sounds like we might be able to use parameters, but...)
3. We need to grab a slice of data and run it through a standard clean/enrich/build (ETL) process and then save out a file into a folder... but the filename needs to be unique, and relate (in some way) to the parameters which were used to identify the 'slice'
4. In this way, we can 'roll across' a large database and output standardised output for every user/group (hundreds of them) in an automated way
I think I'm coming to the conclusion that we need to get you guys to take on some custom development work for us - I don't think this approach works using a combination of batch output / parameters... although it feels like this is / could be the direction in which you're heading?
Does anyone else see this 'custom/standard' (custom data set / standard model template) over large databases/cubes as a valuable use for Omniscope?
You can use XML actions to automate:
- open IOK file with DataManager model (the data in the IOK itself is irrelevant)
- change parameter value(s), e.g. changing a database query to select only a user's data
- publish all output blocks
You can generate your own XML actions dynamically and execute them in the way you're already familiar with. Doesn't this do what you need? Admittedly your use case isn't provided entirely by DataManager itself, but the possibilities for how you might use this (albeit with XML actions automation) are endless.
I'm not sure if the output filename for publish blocks is parameterised yet - you can work around this by exporting to a temp location then renaming/copying.
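For example, the generated actions for one user might look roughly like the sketch below. The tag and attribute names here are illustrative only - please check the Enterprise XML actions documentation for the real schema:

  <!-- Illustrative only: element and attribute names are assumptions. -->
  <actions>
    <open file="C:\models\build.iok"/>              <!-- IOK holding the DM model -->
    <set-parameter name="UserId" value="jsmith"/>   <!-- drives the database query -->
    <publish-outputs/>                              <!-- execute all output blocks -->
    <close/>
  </actions>

You would generate one such batch per user/group from your list, and feed them to the scheduler in turn.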
Yes - I think that probably *does* do it :) We need to order some more Enterprise CPLs before we can try this out, but it all makes sense... and I think I may have found a way to manage a single output block, using batch output and a dynamic definition file :)
It makes sense to me now, but I was struggling with the following:
1. Where can I use parameters? Having created some, and set values, where can I use them?... I now understand that I can only use them (currently) in database blocks - it would be great to refer to them elsewhere (e.g. output blocks - in filenames - filter blocks etc.)
2. How do I set parameters from 'outside' Omniscope?... I now understand I can drive them from another Omniscope instance, or via XML in Enterprise
3. How do I 'get at' parameters from another Omniscope? I had thought this was a file-based operation... I now understand that I need another Omniscope instance running (and I need to enable it to share parameter values)
4. How do I control Batch Output?... I now understand that I can create a 'command file' and define a sequence of output files (or emails) based on filters etc... I have also figured out that I can create a correctly formatted 'command file' in a DM file, and derive rows from a data source, using formulae to automate the production of all permissible filter values etc.
I'm still struggling, slightly, with how to stitch all this together for maximum control / minimum maintenance - but I think I'm well on the way now, and will be clearer still once we have additional Enterprise installations to experiment with.
OK, time for an update on this one :) It crosses over between DataManager and Enterprise/Scheduler, but here goes...
We have been doing a lot of testing with Enterprise 2.6 and attempting to drive DataManager models using Enterprise XML. The problem is (or appears to be) that the two don't fit together terribly well - DataManager introduces a 'fully managed history', in the sense that we can build, edit and generally control a complex set of data sources and operations... but the Enterprise XML is largely unaware of all of this (although there is a simple 'clear/retain history' flag in one of the commands), and using various Enterprise commands (e.g. edit/reconfigure data source) simply obliterates the entire DataManager model.
So, whilst I can set parameters using Enterprise XML (works well), or publish output files, I can't address any of the blocks inside my DM model. If I have a simple data source (e.g. a custom SQL statement from one SQL Server database) then I can process its contents in a myriad of ways in DM... but if I use Enterprise XML to change the location (server name, username, password, SQL table/view/function name etc.) of the SQL source, the entire DM process map is replaced with the new SQL details.
I understand that DM is a huge additional feature set, but at present it seems that we can either use DM OR Enterprise, but not both together. I also appreciate that designing an XML action approach to allow configuration of ALL DM blocks is a mammoth task, and is unlikely to appear in 2.6.
Interested in thoughts/views... have other 2.6 / Enterprise / DM users come across these challenges? Are there workarounds which I'm missing?
Two options:
1. We make the fields you want to change programmatically parameterisable (whether in source, operation or publisher blocks).
2. We add an Enterprise action allowing you to "update source" for a single source block, by name.
Realistically, what fields do you actually need to programmatically change in your current use cases?
The second of your suggestions is probably the most flexible... and if/when other blocks appear in our sights - or anyone else's - then they could be tackled with the same approach.
So - the use case in question: we have a set of fairly straightforward analysis models which, at present, are fed from our regular design/source/model approach - namely:
1. Produce a BUILD IOK which organises data from one or more sources using DM
2. Output to a SOURCE IOK from the BUILD model
3. Build an analysis model (devoid of any calculations, merges etc.) from the SOURCE model
This approach has evolved along with DataManager... and is probably increasingly redundant given DM capabilities, but it's what we have in this case.
We now want to build the analysis model directly from its original data source i.e. a single SQL Server query in a single Database block (no multiple sources in this specific case). We want to do this because we are building multiple variants of the underlying model (using lots of different queries against the data source). So we want to replace a number of attributes of a single database block - for example, server name, instance name, SQL statement etc.
We have been trying to do this using the 'Replace data source' and 'Refresh data source' XML actions (apologies if I have the wrong names there, but you know what I mean)... but these commands replace the entire DM process map with a single database block that has the new settings.
If we could access a uniquely named data source block (easy in this case, as there is only one!) and change any of its settings - using an XML Action - then this would (I believe!) solve our last remaining hurdle.
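To make that concrete, the kind of action we're imagining would look something like the sketch below - every element and attribute name here is invented purely for illustration, it's just the shape we're after:

  <!-- Hypothetical shape only: all element/attribute names are invented. -->
  <action type="update-source-block">
    <block name="MainQuery"/>                      <!-- the uniquely named DM source block -->
    <setting key="server" value="SQLPROD01"/>
    <setting key="database" value="Sales"/>
    <setting key="sql" value="SELECT * FROM vw_UserSlice"/>
  </action>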
Guy - I was forwarded a relevant email from Ed to Mohamed, which I'll respond to here:
"...it does seem a bit disjointed that in 2.6 desktop a data source is a single block pointing at a data source, but in Enterprise it interprets the whole data manager flow as the source..."
DataManager is the source for Omniscope; it also has nested sources. Pro and legacy files have a non-DataManager "simple source", while 2.6 DM files have a DataManager "complex/nested" source. This is all necessary to support a non-DataManager Pro edition.
Thanks very much for the build - we have now downloaded and installed it on our test environment. The new functionality for reconfiguring the DM source block does seem to work perfectly.