I should add: it is possible to update tables with relations intact if you use the SQL UPDATE statement, but I don’t think this is the way you’d like to go.
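For completeness, a minimal sketch of such an UPDATE (the table, column and values here are invented purely for illustration):

-- raise the price of every product in one category by 10 %
UPDATE "Products" SET "Price" = "Price" * 1.1 WHERE "CategoryID" = 3;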
Calc and Base are very different. Formulas in Base are not executed from a “cell” formula, but from a macro script which is not stored in the table. For example, to sum column1 (cell1) plus column2 (cell2) and show the result in column3 (cell3): in Calc, the formula “=cell1+cell2” is written in cell3, where the result will be shown. In Base, a similar formula must be written in an SQL query or a subroutine (macro) that executes when the user does something (an event) with the mouse (click) or keyboard (Enter), or in a form control (list box) based on column3 of the underlying table. It may be easier for now to stay in Calc and explore the built-in Data / Validity method. Data can be easily recalled and duplicated using this tool! If you would like a small sample of this, reply and I will post a sample ods as a suggested solution. Cheers!
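As a rough sketch of the query variant (the table and column names here are only placeholders, not taken from any actual file):

-- the third result column is computed on the fly; nothing is stored in the table
SELECT "column1", "column2", "column1" + "column2" AS "column3" FROM "MyTable";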
Sorry, that’s bullshit. The formula language of a relational database is SQL. It can do simple arithmetic out of the box, faster and more reliably than any spreadsheet.
Yes, SQL also. (Edited) The point was that Base tables do not execute Calc formulas written in a column. The user has to “write” a macro (or SQL) to fill a column with calculated results. I guess I missed the part in the poster’s question about being concerned with Calc’s performance and reliability. If one wants to copy and paste, drag and drop, or make calculations right in a column… LO Base is NOT the platform for those things… Calc is!
You should never save calculated values in a table. You save the original data and calculate everything in a query. That way, if you change the values of the fields used in the calculation, the integrity won’t be lost:
Example:
"a" * "b"
In SQL:
SELECT "a", "b", "a"*"b" FROM "Table"
a=2; b=3; the query will show 6.
Now do this in Calc and copy the content from Calc (or create it by a macro) into a Base file:
The result in the table would be 6. Looks fine, but if you change
a=4; b=3; the query will show 12, but the old value you inserted from Calc or by macro will still be 6.
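If the calculated column should be permanently available without being stored, the same query could also be wrapped in a view (just a sketch, using the hypothetical table from above; a Base query saved in the GUI serves the same purpose):

-- the product is recalculated every time the view is read
CREATE VIEW "Table_Calc" AS SELECT "a", "b", "a" * "b" AS "product" FROM "Table";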
It is easy to get some result from a spreadsheet. However, it is incredibly hard to get correct results, particularly when the same spreadsheet is changed thousands of times over the years, collecting data, expanding references, copying formulas around, loading years-old megabytes of data, formulas and formatting into memory before you can enter a new data point.
There is not a single piece of certified accounting software that is based on spreadsheets. No IT professional would store real-world data in spreadsheets. A spreadsheet is a handy tool for quickly drafted demos, presentations, what-if scenarios, predictions based on real-world database data, and database mock-ups. Your Amazon bill is not calculated with Excel, nor is your investment report.
The first spreadsheet, VisiCalc, was invented 45 years ago as a pocket calculator on steroids where one can easily build up calculation models and see how the output changes as the parameters change. The core functionality of Excel and Calc is still the same as in VisiCalc: relative/absolute references, scalar functions working with numbers, range lookups rather than database lookups, two data types, number and text (“labels” in VisiCalc). There is still no consistent way to store data in spreadsheets.
One of the problems most frequently discussed on spreadsheet forums is csv import: people try to import csv data, originating from some database, into their arithmetic calculator and then try to continue working with database data as if their calculator were a database.
Same with Excel/Calc. In a perfect world, you keep data and formulas separated from each other, although the spreadsheet does not enforce any such separation.
You may need to convert calculated values into constant ones when you do some restructuring work on your calculation model or when you encounter performance issues due to many thousands of “database records” with endless cascades of VLOOKUP formulas.
In the real world, there are reasons to deviate from sound principles and store calculated values in a database, depending on how expensive the calculation is. In this case, a database provides the means to protect calculation parameters from being changed.
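For instance, if the parameters live in their own table, access rights can keep them read-only for ordinary accounts (a sketch only; the user name is invented, and whether this applies depends on the database engine and on running it multi-user):

-- grant read access but no UPDATE/INSERT/DELETE on the parameter table
GRANT SELECT ON "Prices" TO "DATAENTRY";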
Invoicing sample with calculated amounts quantity * price and sum(amounts):
Invoices_Articles_Prices.odb (165.1 KB)
A customer table.
Invoices storing the customer ID and invoice date.
A table of products.
A table of prices with a product ID and start date of a changed price.
A sales table storing which article has been sold on which invoice with a quantity figure.
The price of each sale is looked up by the article ID and the invoice date from the table of prices, multiplied by the quantity. The overall sum for each invoice is calculated either by means of a subform query or by the report builder which has its own calculation tools.
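The price lookup and the invoice totals could be expressed in a query roughly like this (a sketch; the actual table and column names in the sample odb may differ):

-- pick the price whose start date is the latest one on or before the invoice date
SELECT S."InvoiceID", SUM(S."Qty" * P."Price") AS "InvoiceTotal"
FROM "Sales" S
JOIN "Invoices" I ON I."ID" = S."InvoiceID"
JOIN "Prices" P ON P."ArticleID" = S."ArticleID"
WHERE P."StartDate" = (
  SELECT MAX(P2."StartDate") FROM "Prices" P2
  WHERE P2."ArticleID" = S."ArticleID" AND P2."StartDate" <= I."InvoiceDate"
)
GROUP BY S."InvoiceID";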
jtm,
This is coming from someone who has been there!
Use LO Base forms with SQL to add data to tables, not the tables themselves!
Lots of YouTube videos in easy-to-understand formats to help you!
Thanks everyone for the answers.
I agree; now that I have learned a bit more about LO Base, I understand that results should not be stored in the tables. But I still need Copy / Paste from Calc to Base because I need to migrate from other closed-source platforms like “S10 Presupuesto” and other ERP programs for construction budgets.
Here is a sample: (real case)
In my work, I received PDF files with the costs of construction:
Individual items:
01. BLOQUE 01 - LISTADO DE INSUMOS.pdf (161.8 KB)
unit costs:
04. INSTALACIONES ELECTRICAS.pdf (336.0 KB)
budget:
02. BLOQUE 01 - PRESUPUESTO.pdf (85.4 KB)
In Excel, I converted all the PDFs to table format using Excel and Notepad++, then exported the metering data from BricsCAD / AutoCAD, then used the =FILTER formula to link all the tables with prices, then used pivot tables to do something similar to what LO Base does with a Query, and the result is this:
PROPUESTA PRESUPUESTO - 01_10_23 - rev01.xlsx (238.0 KB)
Normally this is done with closed-source ERP software, but I found that a migration from these programs to Excel / LO Calc is already possible using relationship tables; I mean, LibreOffice Calc can already completely replace ERP software and Excel, at least for me.
The next step is performance. In both cases, Excel or LO Calc, when I try to recalculate the formulas after updating data / tables / rows / columns, the result is very slow with only 10,000 rows, and this is only a small project.
Now I have found LO Base as an alternative for this task, expecting to update millions of rows / columns and thousands of tables in the future. The only problem I found is that if I rewrite a row / column, or move columns / rows, it is very slow to change each cell individually, because it is not possible to manage tables like in LO Calc.
Deleting all rows / columns and copying / pasting into the same tables from LO Calc to update rows / columns solves this for me, but I think this should not be necessary. If LibreOffice already has those features in LO Calc, why not implement them in LO Base to speed up the workflow? At least optionally. But I am new to SQL databases; maybe there is another method to manage this task?
Because Base is not “the database of LibreOffice”. It is a connecting module to databases, which unifies the access. The database can be a quite simple data source like a bunch of csv files, dBASE files or even a Calc table. But it can also be a full database server like Postgres/MySQL/MariaDB which may not even be on your computer/network, but “somewhere on the internet”. ODBC and JDBC drivers can connect to databases never known to the developers of Base/LibreOffice. So Base can only interact with these databases and often cannot modify data directly.
.
Also consider: these are multi-user databases. So you are usually not expected to delete tables which other users may be accessing at the same time…
.
At the moment I guess you are using the default “HSQLDB embedded” database. This is a database written in Java, where all the data is packed into your .odb file when you close the database and extracted for access when you open the file again. This type is very useful for sending around a “complete database” by email/ftp etc., but is not recommended for productive databases.
Usually a database is/should be a carefully designed structure which is seldom changed. And you usually try to have few tables: so not a “table of outlets in unit6” etc., but a single “table of outlets” for all units with a UnitId column, and the selection is done in a form/query by WHERE UnitId='unit6'.
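A minimal sketch of that single-table layout (all names invented for illustration):

CREATE TABLE "Outlets" (
  "ID" INTEGER PRIMARY KEY,
  "UnitId" VARCHAR(10) NOT NULL,
  "Description" VARCHAR(100)
);
-- one table for all units; filtering replaces the per-unit tables
SELECT * FROM "Outlets" WHERE "UnitId" = 'unit6';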
.
On the other hand, you may find a lot of tables where database designers have tried not to duplicate data and may spread a “simple” customer address over several tables, using one table for all streets and another table for cities…
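Reassembling such a normalized address is just a join, roughly like this (table and column names are invented):

SELECT C."Name", S."StreetName", T."CityName"
FROM "Customers" C
JOIN "Streets" S ON S."ID" = C."StreetID"
JOIN "Cities" T ON T."ID" = S."CityID";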
.
Actually, most databases also have techniques for mass updates and imports from csv etc., but these methods vary a bit. So you need to find out what suits your needs. Generally: updating rows is usually quite efficient; columns can be more tedious. But there are special databases like DuckDB which are more efficient for these tasks (google for columnar database, if you like). Some of the “bigger” projects have different engines for storage, which may be selected during the design of the database.
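With HSQLDB, for example, a csv file can be linked as a text table and copied into a regular table in one statement. This is only a sketch: the file, table and column names are invented, and the exact options (and how comfortably the embedded database handles it) depend on the HSQLDB version:

CREATE TEXT TABLE "ImportCosts" ("Item" VARCHAR(100), "Qty" DECIMAL(10,2), "Price" DECIMAL(10,2));
SET TABLE "ImportCosts" SOURCE "costs.csv;fs=,";
-- bulk-copy the linked csv rows into an ordinary table
INSERT INTO "Costs" ("Item", "Qty", "Price") SELECT "Item", "Qty", "Price" FROM "ImportCosts";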
It is a database, obviously. https://www.youtube.com/watch?v=af9wagkAH6s
Find out the type of database and where it is stored on your computer. If there is some ODBC or JDBC driver for this database, you can connect a Base document to it.
jtm,
I’m working on this… and I will update this post as I progress!
Goal: Import the xlsx spreadsheets into LO Base tables to use in forms and queries.
- Open the xlsx file with LO Calc… then save as an ods file. Alternatively, use the document converter found in the File>Wizards menu of the LO main window.
- Prepare the ods spreadsheets for import into Base… primarily focused on deleting empty columns and rows (especially the empty top row above column names). In sheet ‘APU’, the PARTIDA column needs to have the PARTIDA group name in each row of the corresponding ‘TIPO’. (fill down for each group).
(Some previous info has been deleted here…) - Insert two columns into each sheet at positions 2 and 3 named ‘id’ and ‘fkey’. Save the ods. Should be ready for Base import.
- With the odb open in LO and the “Tables” tab selected, open the prepared ods. On the first sheet, select the entire sheet (upper left corner square) or select all the columns with the mouse. Copy the selection. Head over to the odb window. In the Tables area, right-click and paste from the popup menu. If Paste is not an option, then something is wrong… go back and select and copy the ods sheet again. Upon pasting the ods into the odb tables, a dialog box will pop up. Name the new table similarly to the sheet it was copied from (“importSheetName”). Select the first option to import definition and data (do not create an ID); “use first row as header” should already be selected for you… then Next.
- (Some instructions removed here…) Inspect the text columns and verify their Lengths are adequate for the incoming data… then Create. Do not create a primary key! The “id” already exists. If an error occurs, you will need to verify each previous step… but most likely the error is a result of some imported data strings being too long for the selected column length. Verify that the Length of the text columns is more than enough to accommodate the data. (Length may later be reduced.) Also check that the field type matches the type of data being imported (ex: no text data allowed in a Numeric field type). If all is well, the new table will appear, with complete data, in the list of Tables of the odb! Save the odb!
- Repeat for each of the sheets in the ods spreadsheet. These imported tables are the “building blocks” for the actual relational tables we will create for organizing, editing, and viewing data. There should now be 4 imported Tables in the odb.
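Once the real relational tables have been designed, they could be filled from these imported building blocks with statements along these lines, e.g. via Tools > SQL (just a sketch; “Items” and the column names are invented, only the “importSheetName” naming follows the convention above):

-- copy the distinct rows of an imported sheet into a properly typed table
INSERT INTO "Items" ("Description", "Unit", "Price")
SELECT DISTINCT "DESCRIPTION", "UNIT", "PRICE" FROM "importAPU";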
Updated black friday version… see if this helps!
UNCP.odb (339.9 KB)
Nope. 2,147,483,647 (2³¹ − 1) is the maximum string length for a Java string as used by the underlying HSQLDB. For unlimited strings, there is the Memo field type.
Data types should be as large as necessary, as small as possible.
P.S. I see you are ignoring the bottom part of the table design window where you can define the length of text fields and make them mandatory (disallow empty values).
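For illustration, a table whose columns are sized to the data rather than left at a default might be declared like this (a hypothetical table; in the HSQLDB used by Base, the Memo type corresponds to LONGVARCHAR):

CREATE TABLE "Partidas" (
  "id" INTEGER PRIMARY KEY,
  "Codigo" VARCHAR(20) NOT NULL,        -- short code, mandatory
  "Descripcion" VARCHAR(200) NOT NULL,  -- as large as necessary, as small as possible
  "Nota" LONGVARCHAR                    -- Memo: effectively unlimited text
);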
@Villeroy : I have had a look at why so many people think there is a limit of 255 characters: the import wizard allows a maximum length of 255 and will set longer values back, but if you set Type → Automatic it will recognize that more content needs to fit. A little bit buggy…
Thanks RobertG,
Where does one find the Type–Automatic setting?
As stated before, the day one fails to learn something is probably the day one should stop altogether!
Villeroy,
I am ignoring hours’ worth of stuff to post a very rough example that jtm may be able to get hold of and tune on their own. Give someone a fish and they are fed for a day; teach them to fish and they will feed themselves. Did you expect me to come back with a finished product overnight? I don’t see where you have offered any useful help to jtm whatsoever!
A database with column types VARCHAR(255) NULL does not help anybody. Base wizards do not help anybody (except the report wizard).
I would try to connect a Base document to the backend used by the frontend “S10 Presupuesto”. This would unveil all tables, columns, relations and indices and all the data within that structure within minutes.
unless they “protected” access to this with a password…
.
Googling a bit on the topic didn’t give much, but my top guess would be MS SQL Server.
Edit: The assumption is confirmed when one combines “S10 Presupuesto” and “SQL Server” in the Google search.
https://www.udemy.com/course/presupuestos-de-obras-con-s10-presupuestos/?couponCode=BFCPSALE24 is a video that shows how to log in as a user named “sa” which I would translate as “system administrator”.