Hi Gurus,
I have 12 named arrays, each with 1 column and 4,200+ rows.
The “active” row count grows by 8 rows per day; I predefine the rows every month and “manage” the names by adjusting their end value.
Originally (many moons ago), I simply inserted 250 rows inside the boundary of the array, and any formulae would adjust automatically to the new end rows.
I’ve noticed that when I open the file, I can watch the loading progress bar incrementally stepping along the bottom of the screen.
Is this “stepping” some form of indication that Calc is assessing and rebuilding all the named arrays? As they’re thousands of rows deep, might they require more time than a simple range reference, say =SUBTOTAL(9;A1:A4250) compared with =SUBTOTAL(9;ColA)?
It’s not really an issue of speed, just me trying to understand if some features are less efficient than their alternatives.
Edit: Just recalled, it’s not the named range I have to adjust manually when inserting the “new” rows; it’s the monumental cock-up that occurs in the Conditional Formatting register when a 12×8 block of formula rules is replicated for the “new” rows. The efficiency question is the issue.
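To put rough numbers on that “cock-up”: a toy Python sketch (not Calc’s actual conditional-formatting engine, and the rule counts are illustrative assumptions based on the 12 arrays and 8 rows per day mentioned above) of how cloning per-cell formula rules for every inserted day block makes the CF register grow multiplicatively, compared with a single range-based rule per column.

```python
# Toy model only -- NOT Calc's conditional-formatting implementation.
# Figures taken from the post: 12 named arrays, 8 new rows per day.
COLUMNS = 12
ROWS_PER_DAY = 8

def register_entries(days, per_cell_rules=True):
    """Entries in a hypothetical CF register after `days` day blocks.

    per_cell_rules=True  -> each inserted cell gets its own cloned rule,
                            so the register grows by 12 x 8 per day.
    per_cell_rules=False -> one range-based rule per column, which simply
                            stretches when rows are inserted inside it.
    """
    if per_cell_rules:
        return days * COLUMNS * ROWS_PER_DAY
    return COLUMNS

print(register_entries(1))                          # 96 after one day block
print(register_entries(31))                         # 2976 after a month
print(register_entries(31, per_cell_rules=False))   # 12 with range rules
```

The point of the sketch: if each insertion replicates rules cell-by-cell, the register bloats by 96 entries per day, whereas range-scoped rules stay at a constant dozen.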
To figure out what happens “on load” that causes the significantly increased load time when adding rows, a sample file would be useful.
Could you make a sample with one or two “day blocks”? Either a copy of your current file, or one with dummy data if the real content is sensitive (confidential, intellectual property, or otherwise revealing something you would rather not disclose).
There is almost no difference in execution time between direct references and named ranges: in the compiled formula expression, the executable tokens for named ranges (or named expressions generally, for that matter) already use the references. There is only a very small indirection (looking up the name and adding its tokens) when compiling the formula. There’s also no difference in which arrays are or are not created when named ranges are used; whether a cell range needs to be converted to an array is a property of the function it is an argument of. Specifically, SUBTOTAL() does not force any arrays; it iterates over the cell ranges it is given.
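The explanation above can be sketched in a toy Python model (not Calc’s real compiler; the names `ColA`, the dummy cell data, and the helper functions are all invented for illustration): the name is resolved to its reference tokens once, at compile time, so the compiled token stream, and therefore execution, is identical to using the direct reference.

```python
# Toy model only -- NOT Calc's actual formula compiler.
# Hypothetical named expression and dummy column data for illustration.
named_expressions = {"ColA": ("A", 1, 4250)}           # name -> (col, first, last)
cells = {("A", r): float(r) for r in range(1, 4251)}   # dummy values 1..4250

def compile_formula(arg):
    """Compile-time step: a name costs one dictionary lookup, after which
    it is the same reference token a direct range would produce."""
    if arg in named_expressions:        # the small, one-time indirection
        return named_expressions[arg]
    col, rest = arg[0], arg[1:]         # parse e.g. "A1:A4250"
    first, last = rest.split(":" + col)
    return (col, int(first), int(last))

def subtotal_sum(ref):
    """SUBTOTAL(9;...)-style sum: iterates over the cell range directly;
    no array is forced."""
    col, first, last = ref
    return sum(cells[(col, r)] for r in range(first, last + 1))

direct = compile_formula("A1:A4250")
named = compile_formula("ColA")
assert direct == named        # identical compiled reference tokens
print(subtotal_sum(named))    # -> 9033375.0, same result either way
```

Once compiled, the two formulas are indistinguishable; the only extra cost of the name was paid once, at compile time.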
Any insight on what the startup progress bar is indicating would be appreciated.
For comparison, the file with the visibly noticeable progress-bar increments is half the size of another file with a virtually identical row count but fewer columns. Both files have embedded charts, but the larger file shows perhaps only 3 or 4 incremental steps on the progress bar, compared with at least 12 slower steps for the smaller file.
Is the conditional formatting in the smaller file a likely contributory factor in the number of visual increments of the progress bar?
tdf#124098 ?
Not really. I’m not getting error messages, and I didn’t mean to imply that it was seriously slowing things down; just that the smaller file, with more “visual” effects and labels, also demonstrated a slower load with more incremental steps, though probably not amounting to even an extra second.
Also, the file has more grouped and hidden columns, both in number and in clusters, so that was probably going to be my next line of enquiry.
Which error messages are you referring to?
@erAck, this seems like a logical place to ask this how-come question: if we create a UDF macro that stores a value as a named value, to do perverse things like keeping intermediate stored values in a large formula, why is Calc so very slow at it? Creating one variable and calling it back a couple of times happens at human speeds.
Sorry, Mike, I misinterpreted ‘It writes “adapt Row Height”’ as an implication of LO providing an error message
I don’t know what you are talking about. Best create a new topic and attach a sample.