Open Database Challenge Cup

Thank you for your insight and vision - you are absolutely “spot on”. I am the only user. I’m not quite sure what all the technical terms imply, but I promise you, I wouldn’t open the tables. Once my three transaction items are processed it’s “set in stone”. In fact, with the exception of the “helper” cells, I regularly just cut and Paste Special “values only” over much of the structure so I don’t recalculate 20,000 cells every time I touch the Enter key. I know LO Calc can cope with that tiny volume, but to ensure I don’t lose the input during a power outage I actually exported it to GSheets and perform the event processing online. Sheets sucks, but then again, LO Calc is still a nightmare when trying to save and open documents stored on a GDrive. :man_shrugging:

Thanks again - I learned a couple of things

I probably didn’t envisage replacing the Calc with a DB in a production capacity - it’s just that I kept hearing “You’re making a pig’s ear of it, you should be using a database” so often that I thought to myself, “You’ve got all these rows, why not use them as the source data for a DB experiment?”
Hopefully, I could then learn a couple of new things.

Incorrect assumption. I’ve known about BMI for decades. I could not determine that that was what you were dealing with here, especially with the misleading title of the post.

As for a database, most who are familiar with Base should see little problem in creating one. For someone without a working knowledge of Base it will be a task. The existing data will also need to be transferred.

You completely missed the point.
I wasn’t dealing with BMI - that I could achieve with one simple formula - and I implied that I had no idea what my BMI was at the time, primarily because I didn’t even care what my weight was at that time. I imagine you will still have to calculate your BMI after hopping on the scales - so my assumption was far from incorrect.

I was dealing with the intriguing nature of everybody’s weight change throughout a 24-hour period.
I was simply seeking insight into the nature of normalisation and some hints as to which parts of my “template” would be considered the record. Should one record be one existing row, or should one record be one day, with the value for each of the existing “checkpoints” stored in that single record in a sequential order reflecting the event chronology?
In all honesty, I would have thought labelling things like “pre-breakfast”, “post-breakfast”, “pre-dinner” with values like 74.5, 75.1, 75.3 would probably give a gentle hint that weight was being measured on a regular basis.
I guess I was wrong. In any event - Joshua provided the insight I sought.

BTW, @WhiteKnight, “pig’s ear” is a recursive relationship in a class diagram.

And someone wants you to figure it out… Referential integrity; deletion, insertion and update anomalies; and so on. That’s a whole theory, and it takes practice.

Edit: If you are the only user, then you can organize everything using a spreadsheet.


Nope. Incorrect again. It’s easy to remember whether it’s good or not based upon current weight. Personally, I don’t care, as I haven’t thought of this in years.

Which I thought was to convert something you had into a database. I seldom ask a person’s motivation for doing a particular task, as many may be questionable to me.

With “Events”, it seemed like something which occurred at each event. The numbers meant nothing to me. I deal with pounds.

British, informal: to do or manage (something) badly. “He has made a pig’s ear of his reelection campaign.”
Is it safe to assume that the recursive-relationship definition has the same interpretation? It certainly looks, from your diagram, as though it’s going around and around in an endeavour to disappear…

Now I understand. Whilst you may not always be right - you’re never wrong.

So you didn’t even read the first paragraph - that would be the one before the pretty picture.

Yes, I understood correctly. Just kidding. What is in the picture is somewhat reminiscent of a pig’s ear. It just so happened…
(Screenshot from 2021-12-29 20-56-58)


You obviously have not seen enough of my posts. I have admitted to being wrong many, many times.

Have you had the “last word” yet?

Have you stopped insulting me?

Perhaps you could come back with that when you have understood the difference between insults and repartee.
Even the forum has woken up to it:

Encourage everyone to get involved in the conversation

You’ve replied 3 times to @Ratslinger in this particular topic!

A great discussion includes many voices and perspectives. Can you get anybody else involved?

And don’t forget, if you’d like to continue your conversation with this particular user at length outside of public view, send them a personal message.

Yeah, that’s repartee. I’m done. I just tried throughout to understand what you wanted and to help.
Fire away. I won’t reply anymore.

Ok, so what you have here is a real-time logging system. There is a catch, however, because you don’t just want to see a log across time, you also want to compare log events by category. You are recording events serially, but want to cross-check, say, waking weight with previous waking weight, etc. Then, overall, you want to see where your variances are. This is a lot like load management. For example, say I want to record traffic over time on a certain stretch of road - well, obviously, I’m also going to be interested in how traffic changes based on nearby factory shift changes and weekdays.

There may be a Bluetooth scale with an app that already does this! (Who knows…)

But, to the point, you have to decide what’s the penny (or other local oxidizing currency). Here’s what I mean. Take twenty pennies and line them up by date at the near end of your desk, by your belly. Now push them toward the far end of your desk based on how shiny they are…the shinier, the further. What you expect is roughly a linear function, a line angling up from left to right. Any time you want to do an analysis, you always ask yourself “What’s the penny, and what are the characteristics I’m needing?” Here the penny was the penny, and date and shininess were the characteristics.

For your data, you have to decide what the penny is, and which characteristics are not derivable. For example, let’s say a “weighing” is the penny. Now, are the time and the meal status really distinct? If your analysis could tolerate zoning time to assume meal status, then no, they are really just time alone. If your analysis cannot tolerate that, then meal status becomes a characteristic of the weighing, like shininess for a penny. So, each weighing has 1) datetime (which is just ONE characteristic, since you can always derive weekday, etc.), 2) weight (obviously), and 3) meal status. Now, if your analysis requires comparing, say, all pre-meal weighings no matter which meal, then meal status splits into 3) meal and 4) status.
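
If it helps to see that single-table shape concretely, here is a rough sketch using Python’s sqlite3 purely as a stand-in (Base would sit on HSQLDB or Firebird, and every name below is made up for illustration):

```python
import sqlite3

# Illustration only: one row per weighing, with meal and status kept as
# separate characteristics so "all pre-meal weighings" is a simple filter.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE weighing (
        id        INTEGER PRIMARY KEY,
        taken_at  TEXT NOT NULL,   -- full datetime; weekday etc. is derivable
        weight_kg REAL NOT NULL,
        meal      TEXT,            -- 'breakfast', 'lunch', 'dinner', ...
        status    TEXT             -- 'pre' or 'post'
    )
""")
con.executemany(
    "INSERT INTO weighing (taken_at, weight_kg, meal, status) VALUES (?, ?, ?, ?)",
    [("2021-12-29 07:05", 74.5, "breakfast", "pre"),
     ("2021-12-29 08:10", 75.1, "breakfast", "post"),
     ("2021-12-29 18:45", 75.3, "dinner",    "pre")],
)
# "All pre-meal weighings, no matter which meal" is then just a filter:
for row in con.execute("SELECT taken_at, weight_kg FROM weighing WHERE status = 'pre'"):
    print(row)
```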

Another solution (albeit perhaps more programmatic in its implementation) would be to make all weighings as simple as pennies: just 1) datetime and 2) weight. Then record meals separately, each meal being 1) datetime and 2) type (lunch, dinner, etc.). Then the system would determine the rest.
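
A sketch of that two-table version, again with sqlite3 only as a stand-in and all names invented for the example - the point is that the relationship between weighings and meals is worked out by a query after the fact:

```python
import sqlite3

# Illustration only: weighings know nothing about meals; meals are logged
# on their own, and a query derives the relationship afterwards.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE weighing (taken_at TEXT NOT NULL, weight_kg REAL NOT NULL);
    CREATE TABLE meal     (eaten_at TEXT NOT NULL, meal_type TEXT NOT NULL);

    INSERT INTO weighing VALUES ('2021-12-29 07:05', 74.5),
                                ('2021-12-29 08:10', 75.1),
                                ('2021-12-29 18:45', 75.3);
    INSERT INTO meal     VALUES ('2021-12-29 07:30', 'breakfast'),
                                ('2021-12-29 18:55', 'dinner');
""")

# For each weighing, derive the most recent meal eaten before it (if any).
query = """
    SELECT w.taken_at, w.weight_kg,
           (SELECT m.meal_type FROM meal m
             WHERE m.eaten_at <= w.taken_at
             ORDER BY m.eaten_at DESC LIMIT 1) AS last_meal
    FROM weighing w
    ORDER BY w.taken_at
"""
for row in con.execute(query):
    print(row)   # ('2021-12-29 07:05', 74.5, None) -> taken before breakfast
```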

Do you see how your analytic needs really determine how you set up the data?

If you wanted to learn some Base with this, you could decide what your penny is and what its characteristics need to be in order to do the analysis you want. I think datetime/weight/meal status for each weighing is a good start. Then launch Base, choose to create a new database, and opt to go through the table wizard. Then thumb through the Business and Personal categories in the table wizard and think about which sample table would map most closely to what you are doing. For example, there is a diet log and an exercise log under Personal. You can start there and play with renaming things, etc., to see if you can build a database that has the data you need, then as time goes on learn more about aggregating that data for reports.

If I understood some of my studies correctly - and your posts have helped immensely - one of the most significant aspects of my sheet - the relative referencing of comparable data from a prior event (the daily moving average, daily growth, change on event & comparison with the previous day’s entry) - is really bad practice in a database.
These movements are established at the time the entry is created and never change once the record is committed. I accept that they are “updated” every time I touch [enter] but they are created automatically.
If those values don’t exist in the DB (as I understand it, the records should not be utilising other records for their calculations), then there’s no point in the exercise.
The sheet is instructed to assess those values based upon their relative position in the row hierarchy. I can’t see a DB report being able to easily perform that operation without having to recalculate every “marker” to ascertain whether it matches my search criteria. Am I wrong?
A “data entry form” would require the relative reading of prior records to calculate the moving values and embed them into the new record.
One of the moving targets is today’s average, which for every line has to recognise the hierarchy and read all the preceding lines of that day’s events - as you’re no doubt aware, aggregation processes like SUM(), AVERAGE(), etc., lose their compass when embedded in an IF() and the sheet is sorted or filtered.
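
For what it’s worth, those “movement” values can usually be derived by a query from the raw rows instead of being stored in each record. A sketch, with sqlite3 standing in for whatever engine Base would use (the embedded HSQLDB lacks window functions as far as I know, while Firebird 3+ and recent SQLite have them), and with made-up names throughout:

```python
import sqlite3

# Sketch only: change-on-event and a running daily average computed by the
# query itself, so nothing "analytic" needs to live inside the records.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE weighing (taken_at TEXT NOT NULL, weight_kg REAL NOT NULL)")
con.executemany("INSERT INTO weighing VALUES (?, ?)",
                [("2021-12-29 07:05", 74.5), ("2021-12-29 08:10", 75.1),
                 ("2021-12-29 18:45", 75.3), ("2021-12-30 07:00", 74.8)])

query = """
    SELECT taken_at,
           weight_kg,
           weight_kg - LAG(weight_kg) OVER (ORDER BY taken_at) AS change_on_event,
           AVG(weight_kg) OVER (PARTITION BY date(taken_at)
                                ORDER BY taken_at)             AS running_day_avg
    FROM weighing
    ORDER BY taken_at
"""
for row in con.execute(query):
    print(row)
```
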
I suspect I would save the effort of creating the eight - static - daily labels by storing them in a reference table, and then fail miserably when it came to defining an input form that would hold yesterday’s full history, accumulate today’s movements, and only create DB records after I commit the day’s events in their entirety - writing eight records, including their “analysis” values.
Seven of the dailies would just store their “own” comparisons and the “final” record for that day could also store the accumulated effect.
Two redundant “cells” in the first 7 daily events.
Forgive the rambling. I’m trying to identify the pros & cons as I go and just had the “Eureka” moment of defining an input form that holds yesterday’s record, accumulates the daily “event” movements and then commits 8 records to the DB once today is finalised. That obviates all the sorted/filtered array-process confusion in IF().
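
A minimal sketch of that “commit the whole day at once” idea, assuming the same made-up weighing table as above: collect the day’s checkpoints in memory, then write them inside a single transaction so a half-entered day never reaches the table.

```python
import sqlite3

# Sketch: hold the day's checkpoints outside the database, then write all
# of them in one transaction - either the whole day lands or none of it.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE weighing (taken_at TEXT NOT NULL, weight_kg REAL NOT NULL)")

days_events = [                      # accumulated during the day, e.g. by an input form
    ("2021-12-30 07:00", 74.8),
    ("2021-12-30 08:05", 75.2),
    # ... up to the eight daily checkpoints
]

with con:                            # one transaction: commit on success, roll back on error
    con.executemany("INSERT INTO weighing VALUES (?, ?)", days_events)
```
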
Question: If I’m using an input form that “holds” seven parts of an incomplete daily commit, do I lose them if I exit the screen without committing, or does LO perform a “holding” operation to ensure I can recover them?

A big yes and no. A database would be more or less perfect for what you are wanting to do, except that you might have more inter-process communication than you would within Calc, since Base hands the work off to Firebird, or whatever the underlying database is.

Don’t be fooled by the billion operations per second. Calc, a relational database system, or whatever other “von Neumann” computer system is basically going to do the exact same thing to process data at some level, and there is very little the end user (us) can do to optimize that process that the genius system designers haven’t already done.

It looks like “all” Calc “has to do” is “just pick” the post-lunch events, or whatever, because our parallel minds see those “right there” on the spreadsheet. But Calc never “sees” its own spreadsheets. It has to process the table of all events and copy or create a pointer list to matching events (post-lunch) just like a database would. Yes, you might happen onto a strategy that stopped certain cells from having to recalculate, but there’s no defined way to do that. You could put in accumulating values and end up just forcing Calc to do that much extra work on top of summing/averaging over all events meeting a criterion. I just had Calc generate 10,000 random numbers between 0 and 2,000,000,000 and then average them. On a seven-year-old computer the little clock icon flickered on and off for well less than a blink of an eye and it was done.
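
Roughly the same back-of-the-envelope test takes a few lines of Python, if you want a feel for the scale involved (illustrative only, not a benchmark):

```python
import random
import time

# Roughly the same experiment: 10,000 random values up to 2,000,000,000,
# averaged once. On any modern machine this is far below a millisecond.
values = [random.randint(0, 2_000_000_000) for _ in range(10_000)]

start = time.perf_counter()
average = sum(values) / len(values)
elapsed = time.perf_counter() - start

print(f"average = {average:,.0f}  computed in {elapsed * 1000:.3f} ms")
```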

The modern programmer almost never worries about “maintaining state” (intermediate values) just for optimization if computation and data are on the same physical computer system. It is handled by the operating system (memory paging, etc.), but it isn’t a programmer’s concern. Even over networks most caching happens in the background. Back in the ’80s I remember having to segment data, bring it into the 64 KB machine in chunks, etc., but now I just get a stream object of some sort and go.

Back in the ’60s I was lucky enough to have a history teacher who engaged his students and helped them develop their knowledge by encouraging their natural inquisitiveness. Your interaction encouraged me to read the Base wiki page by page - link by link. I now understand what you are saying, and thank you for your patience.
Learned at least two things today :+1: :+1:

I wouldn’t have bothered myself either, but as I mentioned, I upload it to GSheets for power-outage safety and they, in turn, download all the effort to my browser. I can make a cup of tea while it just loads the sheet and then loses its compass for placement of the charts. :man_shrugging: