I must be lucky: I don’t get those “features”, so I don’t need to make allowances for them, and I don’t think I will ever need all my accounts in the same “Database”, as they are not related in any way, so it’s safe enough to keep them in separate files. Mind you, you’ve inspired me. As they’re identical structures, I may just pour them all into the same pot to see how it performs - as a spreadsheet😉
I sometimes use this approach, but my lack of experience with both products tends to leave me floundering with the comparable syntax. Simple solution: download the sample file and open it with LO Calc. Automatic transliteration.
You can do that too. The main thing is to know that there are no “smart” tables in LO Calc (the self-expanding kind - MS Excel’s ListObject).
There are a lot of resources. For example, this one.
I’ve filed a bug on this. If you create two autofiltered data tables with a few rows between the end of table one and the beginning of table two, and you inadvertently enter values into the cells between them, their ranges are merged: the moment you activate a filter, the headers and data rows of table two end up embedded in the data rows of table one. This can be obviated by creating a buffer row between the tables with a ridiculous data validation scheme, thereby preventing ANY data entry.
Edit: The same anomaly doesn’t manifest with adjacent tables - they retain their individuality.
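For anyone wanting to reproduce the buffer row, one scheme that blocks all entry (a sketch of the idea, not necessarily my exact settings):

    Data > Validity > Criteria
        Allow: Text length
        Data:  equal
        Value: -1
    Data > Validity > Error Alert
        Action: Stop

No entry can have a text length of -1, so everything typed into the buffer row is rejected.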
Anything related to record sets is not a sparrow. And a simple database is not a cannon.
Create a compilation from a table
Where is the sparrow and the cannon in this most recent example?
Access was in many cases not part of the installed package, and MS Word was sold separately, so it had to be able to handle mail merge on its own, without Excel or Access. For LibreOffice it was no problem to rely on an .odb file as the connection to a CSV that Writer never opens directly. So the ecosystem “forced” solutions.
I really like to combine both worlds: a database guarding my data and keeping the types tidy, while I can drag any query into a spreadsheet and analyse it from there.
And with a fairly well-thought-out Calc I have a 100% adequate and accurate record, with the added advantage of a real-time adaptive dashboard.
If a Db needs to select a segment of the data, a query has to be constructed and tested for integrity - more or less the tedious equivalent of an autofilter.
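Something as basic as this (table and column names invented for the example) has to be composed, run and checked before you trust it:

    SELECT * FROM "Expenses" WHERE "Category" = 'Fuel' AND "Date" >= '2024-01-01'

The same selection in Calc is a couple of clicks on the autofilter buttons.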
If a Db is required to be analysed for both historical and prospective trends, the data has to be extracted and transferred to a Calc - skip that, Calc already contains charting and analysis tools with a splendid selection of trend analysis possibilities.
If, after 10000 rows, I discover a superior function or even decide to change the methodology for every calculated cell in the row, I can change one row or any part of one row until I’m happy with the result, select and double-click the resulting marquee and the entire sheet is redefined and recalculated.
Did I say 10000? I’ve experimented with replications to many tens/hundreds of thousands with the same efficacy.
How long does it take to achieve that with a Db?
If I decide I want a radical new design and start inserting columns into the structure, all my dependencies are automatically updated, all the references anywhere in the document tag along - just for the fun of it.
I fail to understand just what is so evil about a mechanism designed for the masses, at great cost in both financial and creative-energy terms, that produces a satisfying result from the germ of an idea.
As they say, a camel is a horse produced after the committee gave the wrong instructions to a team of developers who had little or no idea what the ultimate user wanted (or needed).
With a Calc, I get to be everybody in the supply chain so I get what I want - with occasional help from people who know more than I.
I like @Villeroy’s posts; I read them regularly. But I am well versed in both databases and spreadsheets, and I find the areas where each product is useful.
@WhiteKnight: So have you never had the desire to thoroughly study Mike Girvin’s book (“Ctrl+Shift+Enter: Mastering Excel Array Formulas: Do the Impossible…”, 2013)?
MS Excel has long since moved ahead into dynamic arrays and LAMBDA functions, so this book is just right for us; for Excel users it is no longer so relevant.
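A toy illustration of the gap (my own example, ranges invented): what Calc still needs a Ctrl+Shift+Enter array formula for, current Excel can express as a reusable LAMBDA:

    Calc (confirm with Ctrl+Shift+Enter):  {=SUM(A1:A10*B1:B10)}
    Excel (inline LAMBDA call):            =LAMBDA(x, y, SUM(x*y))(A1:A10, B1:B10)

The first form is exactly what Girvin’s book teaches; the second is where Excel has since gone.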
NOTE: In general, I think the best assistants and sources of information about LO Calc are the MS Excel expert sites; I come here for macros (even if that’s evil).
There are simply too many similarities between the two products at the level of LO Calc’s functionality.
But I don’t use MS Office anymore, having moved to Linux.
As do I. Even if he does (over)promote Dbs, he still offers meaningful insight into the resolution of the issues I raise.
I found the pdf you hinted existed and I’m still studying it. I must confess I still struggle with array functions, because they tend to be more sophisticated (and harder) than anything I’ve produced so far. I did ask earlier whether anybody thought what I was doing merited the sophistication (and power) of array processing, but nothing convincing was concluded.
I also have a number of them bookmarked, but this is the only site where I post queries, as it’s the product I favour. I have a few contacts who sometimes ask me to produce a “quick and dirty” analysis of, or calculator for, something they’re interested in, and they all use Excel, so I often export from LO Calc to Excel for them.
I thought these looked quite amazing, so I filed an enhancement request. I often have to break my problem down into three or four cells to get the right syntax and then combine those into a single-cell formula - LAMBDA would save me the trouble.
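A made-up example of the kind of thing I mean (cells and rates are hypothetical). Today I stage it across helper cells:

    B1: =A1*$F$1        apply a rate
    B2: =B1+$F$2        add a fixed fee
    B3: =ROUND(B2; 2)   the presentable result

With LAMBDA the whole pipeline would collapse into one cell (Excel syntax):

    =LAMBDA(p, ROUND(p*$F$1+$F$2, 2))(A1)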
I’ve loaded Ubuntu onto a spare notebook so I could experiment, but I’m one of those old dogs finding it difficult to revert to the shell-type environment I moved away from with the advent of Windows 3. I think my biggest drawback is that I have a limited amount of time for assimilating replacements for things that are nowadays almost a habit. Not enough time left for the appropriate learning curve.
SELECT * FROM "datatable" WHERE "date" > :start
I can write that in the same time it takes to click through the Auto- or other filters.
I’m not trying to convince you, if Calc is all you need.
In my tests, Calc was unusable at 10000 rows because of an array formula over the whole set of rows. Keeping the data separately in a DB solved my problem.
Structural changes are quick for the simpler cases you describe.
If you need to publish data on the web, it is nice if you already use MariaDB, but I have also created HTML directly from Calc…
I’m still not up to speed with array formulae, so I probably just got lucky using rows that resembled small Db records. Literally the only array-style function I use - SUBTOTAL() - is perfect for my (filtered) needs.
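For anyone curious, the pattern I mean is simply (range invented):

    =SUBTOTAL(9; B2:B10000)

where 9 selects SUM, evaluated only over the rows the filter leaves visible.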
All my stuff is for personal consumption, so I have only learned how to make an API call to get FOREX rates and how to create HYPERLINK buttons for quick access to info already on the internet.
EDIT: And I soon learned not to update too frequently, or I’d have to undo tens of FOREX updates just to undo one mistyped cell value.
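The buttons, for anyone interested, are nothing more exotic than (URL and label invented):

    =HYPERLINK("https://example.com/forex"; "Live FOREX rates")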
I’m wading through it now and probably cheating a little in that I’m jumping straight into the example files to see if any of them actually perform tasks that I’m currently interested in incorporating into any of my sheets.
I’m particularly concerned with the accuracy of his example 2.1, “A math operation on single items”, where he claims to be calculating the change in stock value to ascertain the Max.
E4 indicates a drop in value of 8, whereas E5 indicates a gain of 7. Correct me if I’m wrong, but surely a change of 8 is greater than a change of 7, so what he’s testing for is the greatest GAIN, not the greatest CHANGE. Would an array function permit the evaluation of ABS() to return the correct result?
EDIT: And obviously the result would then need to be presented as -8, the actual Max change.
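For what it’s worth, here is the sort of thing I’m picturing, assuming the changes sit in E4:E10 (both confirmed with Ctrl+Shift+Enter):

    {=MAX(ABS(E4:E10))}
    {=INDEX(E4:E10; MATCH(MAX(ABS(E4:E10)); ABS(E4:E10); 0))}

The first returns 8, the largest change by magnitude; the second returns the same change with its sign, i.e. -8.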
It’s hard for me to say; I no longer have the context to hand.
I just meant to say that this book made a stunning impression on me.
It’s waking neurons I never knew I had.