Open Database Challenge Cup

Now I understand. Whilst you may not always be right - you’re never wrong.

So you didn’t even read the first paragraph - that would be the one before the pretty picture.

Yes, I understood correctly. Just kidding. What is in the picture is somewhat reminiscent of a pig’s ear. It just so happened…
[Screenshot from 2021-12-29 20-56-58]


You obviously have not seen enough of my posts. I have admitted to being wrong many, many times.

Have you had the “last word” yet?

Have you stopped insulting me?

Perhaps you could come back with that when you have understood the difference between insults and repartee.
Even the forum has woken to it:

Encourage everyone to get involved in the conversation

You’ve replied 3 times to @Ratslinger in this particular topic!

A great discussion includes many voices and perspectives. Can you get anybody else involved?

And don’t forget, if you’d like to continue your conversation with this particular user at length outside of public view, send them a personal message.

Yeah, that’s repartee. I’m done. Just tried throughout to understand what you wanted and help.
Fire away. I won’t reply anymore.

OK, so what you have here is a real-time logging system. There is a catch, however: you don’t just want to see a log across time, you also want to compare log events by category. You are recording events serially, but want to cross-check, say, waking weight with previous waking weight, etc. Then, on top of that, you want to see where your variances are. This is a lot like load management. For example, say I want to record traffic over time on a certain stretch of road - well, obviously, I’m also going to be interested in how traffic changes based on nearby factory shift changes and weekdays.

There may be a Bluetooth scale with an app that already does this! (Who knows…)

But, to the point, you have to decide what’s the penny (or other local oxidizing currency). Here’s what I mean. Take twenty pennies and line them up by date at the near end of your desk, by your belly. Now push them toward the far end of your desk based on how shiny they are…the shinier, the further. What you expect is roughly a linear function, a line angling up from left to right. Any time you want to do an analysis, you always ask yourself “What’s the penny, and what are the characteristics I’m needing?” Here the penny was the penny, and date and shininess were the characteristics.

For your data, you have to decide what the penny is, and what characteristics are not derivable. For example, let’s say a “weighing” is the penny. Now, are the time and the meal status really distinct? If your analysis could tolerate zoning time to assume meal status, then no, they are really just time alone. If your analysis cannot tolerate that, then meal status becomes a characteristic of the weighing, like shininess for a penny. So, each weighing has 1) datetime (which is just ONE characteristic, since you can always derive weekday, etc.), 2) weight (obviously), and 3) meal status. Now, if your analysis requires comparing e.g. all pre-meal weighings no matter which meal, the meal status becomes 3) meal and 4) status.
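
If it helps to see that as an actual table, here is a rough sketch. Python’s built-in sqlite3 is only a stand-in (Base itself normally sits on HSQLDB or Firebird, where the SQL is very similar), and the table and column names are invented purely for illustration:

```python
import sqlite3

# sqlite3 is a stand-in here; Base would really be talking to HSQLDB/Firebird,
# but the table design is the same idea.
conn = sqlite3.connect(":memory:")

# One row per weighing ("penny"), with its non-derivable characteristics.
conn.execute("""
    CREATE TABLE weighing (
        weighed_at TIMESTAMP PRIMARY KEY,  -- weekday, date, etc. can all be derived from this
        weight_kg  REAL NOT NULL,
        meal       TEXT,                   -- 'breakfast', 'lunch', 'dinner', ...
        status     TEXT                    -- 'pre' or 'post'
    )
""")

conn.execute("INSERT INTO weighing VALUES (?, ?, ?, ?)",
             ("2021-12-29 07:05", 81.4, "breakfast", "pre"))

# Splitting meal and status means "all pre-meal weighings, no matter which meal"
# stays a one-line question:
for row in conn.execute("SELECT weighed_at, weight_kg FROM weighing WHERE status = 'pre'"):
    print(row)
```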

Another solution (albeit perhaps more programmatic in its implementation) would be to make all weighings as simple as pennies: just 1) datetime and 2) weight. THEN record meals separately as each meal being 1) datetime and 2) type (lunch, dinner, etc.). Then the system would determine the rest.
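
A rough sketch of that second layout (same caveats: sqlite3 as a stand-in, invented names): the weighings stay as plain pennies, meals live in their own table, and the meal status is never stored - it is worked out at query time from the last meal eaten before each weighing.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Weighings are as simple as pennies: just a datetime and a weight.
conn.execute("CREATE TABLE weighing (weighed_at TIMESTAMP, weight_kg REAL)")
# Meals are recorded separately: a datetime and a type.
conn.execute("CREATE TABLE meal (eaten_at TIMESTAMP, meal_type TEXT)")

conn.executemany("INSERT INTO weighing VALUES (?, ?)", [
    ("2021-12-29 07:05", 81.4),
    ("2021-12-29 13:10", 82.0),
])
conn.executemany("INSERT INTO meal VALUES (?, ?)", [
    ("2021-12-29 07:30", "breakfast"),
    ("2021-12-29 12:30", "lunch"),
])

# "The system determines the rest": tag each weighing with the last meal
# eaten before it on the same day (None means a pre-breakfast weighing).
query = """
    SELECT w.weighed_at,
           w.weight_kg,
           (SELECT m.meal_type
              FROM meal m
             WHERE m.eaten_at <= w.weighed_at
               AND date(m.eaten_at) = date(w.weighed_at)
             ORDER BY m.eaten_at DESC
             LIMIT 1) AS last_meal_before
      FROM weighing w
     ORDER BY w.weighed_at
"""
for row in conn.execute(query):
    print(row)
```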

Do you see how your analytic needs really determine how you set up the data?

If you wanted to learn some Base with this, you could decide what your penny is and what its characteristics need to be to do the analysis you want. I think the datetime/weight/mealstatus for each weighing is a good start. Then launch Base and check that you want a new database and that you want to go through the table wizard. Then thumb through the Business and Personal drop boxes in the table wizard and think about which one would map most closely to what you are doing. For example, there is a diet log and exercise log under Personal. You can start there and play with renaming things, etc., to see if you can build a database that has the data you need, then as time goes on learn more about aggregating that data for reports.

If I understood some of my studies correctly - and your posts have helped immensely - one of the most significant aspects of my sheet, the relative referencing of comparable data in a prior event (the daily moving average, daily growth, change on event and comparison with the previous day’s entry), is really bad practice in a database.
These movements are established at the time the entry is created and never change once the record is committed. I accept that they are “updated” every time I touch [enter] but they are created automatically.
If those values don’t exist in the DB (as I understand it, the records should not be utilising other records for their calculations), then there’s no point in the exercise.
The sheet is instructed to assess those values based upon their relative position in the row hierarchy. I can’t see a DB report being able to easily perform that operation without having to recalculate every “marker” to ascertain whether it matches my search criteria. Am I wrong?
A “data entry form” would require the relative reading of prior records to calculate the moving values and embed them into the new record.
One of the moving targets is today’s average, which for every line has to recognise the hierarchy and read all the preceding lines of that day’s events - and as you’re no doubt aware, aggregation functions like SUM(), AVERAGE(), etc., lose their compass when they’re embedded in an IF() and the sheet is sorted or filtered.
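
To make my worry concrete, something like the query below is the on-the-fly recalculation I imagine a report would have to run every time (just a rough sketch, with Python’s sqlite3 standing in for whatever Base actually sits on and made-up table and column names; the window functions need SQLite 3.25 or later):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in; Base would really be talking to HSQLDB/Firebird
conn.execute("CREATE TABLE weighing (weighed_at TIMESTAMP, weight_kg REAL)")
conn.executemany("INSERT INTO weighing VALUES (?, ?)", [
    ("2021-12-29 07:05", 81.4),
    ("2021-12-29 13:10", 82.0),
    ("2021-12-29 19:40", 81.7),
    ("2021-12-30 07:00", 81.1),
])

# The "moving" values are recomputed from the raw rows every time the query runs;
# nothing derived is ever stored in the records themselves.
query = """
    SELECT weighed_at,
           weight_kg,
           AVG(weight_kg) OVER (PARTITION BY date(weighed_at)
                                ORDER BY weighed_at)             AS running_daily_avg,
           weight_kg - LAG(weight_kg) OVER (ORDER BY weighed_at) AS change_on_event
      FROM weighing
     ORDER BY weighed_at
"""
for row in conn.execute(query):
    print(row)
```
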
I suspect I would save the effort of creating the eight (static) daily labels by storing them in a reference table, and then fail miserably when it came to defining an input form that would hold yesterday’s full history, accumulate today’s movements, and only create DB records after I commit the day’s events in their entirety - writing eight records, including their “analysis” values.
Seven of the dailies would just store their “own” comparisons, and the “final” record for that day could also store the accumulated effect.
That leaves two redundant “cells” in the first seven daily events.
Forgive the rambling. I’m trying to identify the pros and cons as I go, and just had the “Eureka” moment of defining an input form that holds yesterday’s record, accumulates the daily “event” movements, and then commits eight records to the DB once today is finalised. That obviates all the sorted/filtered array-processing confusion inside IF().
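
In code terms, I picture that final commit step roughly like the sketch below (sqlite3 again purely as a stand-in, with invented names - the real thing would be a Base form writing to Firebird/HSQLDB):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for the real Base/Firebird database
conn.execute("CREATE TABLE weighing (weighed_at TIMESTAMP, weight_kg REAL, label TEXT)")

# The form accumulates the day's eight events in memory first...
todays_events = [
    ("2021-12-30 07:00", 81.1, "waking"),
    ("2021-12-30 07:45", 81.5, "post-breakfast"),
    # ...six more events for the day...
]

# ...and nothing is written until the whole day is committed in one transaction:
with conn:  # commits on success, rolls back if anything fails part-way
    conn.executemany("INSERT INTO weighing VALUES (?, ?, ?)", todays_events)

print(conn.execute("SELECT COUNT(*) FROM weighing").fetchone())
```
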
Question: If I’m using an input form that “holds” seven parts of an incomplete daily commit, do I lose them if I exit the screen without committing or does LO perform a “holding” operation to ensure I can recover them?

A big yes and no. A database would be more or less perfect for what you are wanting to do, apart from the fact that you might have more inter-process communication than you would within Calc, since Base would be going out to Firebird, or whatever the underlying database is.

Don’t be fooled by the billions of operations per second. Calc, a relational database system, or whatever other “von Neumann” computer system is basically going to do the exact same thing to process data at some level, and there is very little the end user (us) can do to optimize that process that the genius system designers haven’t already done.

It looks like “all” Calc “has to do” is “just pick” the post-lunch events, or whatever, because our parallel minds see those “right there” on the spreadsheet. But Calc never “sees” its own spreadsheets. It has to process the table of all events and copy, or create a pointer list to, the matching events (post-lunch), just like a database would. Yes, you might hit on a strategy that stops certain cells from having to recalculate, but there’s no defined way to do that. You could put in accumulating values and end up just forcing Calc to do that much extra work on top of summing/averaging over all the events meeting a criterion. I just had Calc generate 10,000 random numbers between 0 and 2,000,000,000 and then average them. On a seven-year-old computer the little clock icon flickered on and off for well under the blink of an eye, and it was done.
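
If you want to reproduce that little experiment outside of Calc, it takes a couple of lines (Python here, purely as a neutral illustration with the same parameters):

```python
import random
import time

# Same experiment: 10,000 random numbers between 0 and 2,000,000,000, averaged.
values = [random.randint(0, 2_000_000_000) for _ in range(10_000)]

start = time.perf_counter()
average = sum(values) / len(values)
elapsed_ms = (time.perf_counter() - start) * 1000

print(f"average = {average:,.1f}, computed in {elapsed_ms:.3f} ms")
```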

The modern programmer almost never worries about “maintaining state” (intermediate values) just for optimization if the computation and the data are on the same physical computer system. It is being done via the operating system (memory paging, etc.), but it isn’t a programmer’s concern. Even over networks, most caching happens in the background. Back in the ’80s I remember having to segment data, bring it into a 64 KB machine in chunks, etc., but now I just get a stream object of some sort and go.

Back in the ’60s I was lucky enough to have a history teacher who engaged his students and helped them develop their knowledge by encouraging their natural inquisitiveness. Your interaction encouraged me to read the Base wiki page by page, link by link. I now understand what you are saying, and thank you for your patience.
Learned at least two things today :+1: :+1:

I wouldn’t have bothered myself either, but as I mentioned, I upload it to GSheets as insurance against power outages, and they, in turn, download all the effort to my browser. I can make a cup of tea while it just loads the sheet and then loses its compass over the placement of the charts :man_shrugging: