Squashing database deltas
In Database deltas, you created two deltas in one database. Big, long-lived projects tend to accumulate many updates across several databases. This section shows what happens when Dsynq applies multiple deltas to an empty database and how you can speed up dsynq delta apply
by squashing database deltas up to a certain point in time.
First, check out the data.
> dsynq checkout
receiving incremental file list
receiving incremental file list
Checkout done
Data for 6c1408ba8450e325f041b8fccb924d9e6da92a19 checked out
When finished, don't forget to check in the data to unlock it for others
Note
State 0 is a special state preceding all possible deltas. It represents an empty table.
Use dsynq delta reset
to transition to state 0, which removes all the data from a table.
If you specify no database or table, Dsynq resets all the tables in all the databases in the project.
> dsynq delta reset
Resetting delta(s) from database #1...
Database: hello_dsynq_db
Deleting all data from the tables...
Deleting data from table "item"...
Database delta reset successfully!
> mysql -u root hello_dsynq_db -e "SELECT * FROM item;"
Transition all the way forward from state 0.
> dsynq delta apply
Applying delta(s) to database #1...
Database: hello_dsynq_db
Transitioning to database delta @1528442433.821985...
Inserting...
Inserting into table "item"
Updating...
Updating table "item"
Deleting...
Deleting from table "item"
Transitioning to database delta @1528442435.056242...
Inserting...
Inserting into table "item"
Updating...
Updating table "item"
Deleting...
Deleting from table "item"
Database deltas applied successfully!
> mysql -u root hello_dsynq_db -e "SELECT * FROM item;"
id code name
1 100 One
3 30 Three
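The sequence above — reset to state 0, then transition forward through each delta — can be modeled with a minimal sketch. This is plain Python with hypothetical row values, not Dsynq's actual implementation:

```python
# A table is modeled as a dict keyed by row id; state 0 is the empty table.
# A delta is a triple (inserts, updates, deletes) that moves one state
# to the next. Illustrative sketch only.
def reset():
    """Transition to state 0: remove all the data."""
    return {}

def apply_delta(table, delta):
    """Apply one delta's inserts, updates, and deletes, in that order."""
    inserts, updates, deletes = delta
    for row_id, row in inserts.items():
        table[row_id] = row
    for row_id, row in updates.items():
        table[row_id] = row
    for row_id in deletes:
        table.pop(row_id, None)
    return table

# Hypothetical deltas shaped like the tutorial's "item" table:
# the first inserts three items, the second corrects item One's code
# to 100 and deletes item Two.
delta_1 = ({1: (10, "One"), 2: (20, "Two"), 3: (30, "Three")}, {}, set())
delta_2 = ({}, {1: (100, "One")}, {2})

table = reset()                   # state 0: empty table
for delta in (delta_1, delta_2):  # transition all the way forward
    apply_delta(table, delta)
print(table)  # {1: (100, 'One'), 3: (30, 'Three')}
```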
Dsynq applies database deltas in chronological order, one by one. This can take a long time, especially if the deltas affect many rows. If some rows are inserted only to be deleted later, or are updated multiple times, replaying every delta literally is wasted work.
Use dsynq delta squash
to eliminate this redundancy. This command rolls up the sequential deltas up to a certain point in time into a single delta and saves the result as a new database delta.
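The idea behind squashing can be sketched in a few lines. This is an illustrative Python sketch, not Dsynq's implementation; it assumes a delta is a triple of inserts, updates, and deletes keyed by row id, and the row values are hypothetical:

```python
# Fold a chronological sequence of deltas into one net delta.
# Rows inserted and later deleted disappear entirely, and repeated
# changes to the same row collapse into a single operation.
def squash(deltas):
    net_inserts, net_updates, net_deletes = {}, {}, set()
    for inserts, updates, deletes in deltas:
        for row_id, row in inserts.items():
            net_deletes.discard(row_id)
            net_inserts[row_id] = row
        for row_id, row in updates.items():
            if row_id in net_inserts:
                net_inserts[row_id] = row    # update of a net-inserted row
            else:
                net_updates[row_id] = row    # update of a pre-existing row
        for row_id in deletes:
            if row_id in net_inserts:
                del net_inserts[row_id]      # inserted then deleted: drop both
            else:
                net_updates.pop(row_id, None)
                net_deletes.add(row_id)
    return net_inserts, net_updates, net_deletes

# Hypothetical deltas: item Two is inserted only to be deleted later,
# and item One's code is later corrected to 100.
delta_1 = ({1: (10, "One"), 2: (20, "Two"), 3: (30, "Three")}, {}, set())
delta_2 = ({}, {1: (100, "One")}, {2})

squashed = squash([delta_1, delta_2])
print(squashed)  # ({1: (100, 'One'), 3: (30, 'Three')}, {}, set())
```

Because the deltas in this sketch start from state 0, every surviving row reduces to a plain insert, which is consistent with the squashed delta directory shown below containing only an insert file for the table.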
> dsynq delta squash -D hello_dsynq_db
Squashing delta from database #1...
Database: hello_dsynq_db
New database delta @1528442435.056242_0
Saving table "item"...
The original deltas are still available after squashing until you explicitly delete them.
View the resulting outline after squashing. Note that the new delta, 1528442435.056242_0, contains a single insert file, item.i.csv, and no update or delete files; it stays uncompressed until you check the data in.
> find data/.databases/
data/.databases/
data/.databases/hello_dsynq_db
data/.databases/hello_dsynq_db/delta
data/.databases/hello_dsynq_db/delta/1528442433.821985
data/.databases/hello_dsynq_db/delta/1528442433.821985/.meta
data/.databases/hello_dsynq_db/delta/1528442433.821985/.meta/.dsynqmeta.json
data/.databases/hello_dsynq_db/delta/1528442433.821985/item.d.csv.xz
data/.databases/hello_dsynq_db/delta/1528442433.821985/item.i.csv.xz
data/.databases/hello_dsynq_db/delta/1528442433.821985/item.u.csv.xz
data/.databases/hello_dsynq_db/delta/1528442435.056242
data/.databases/hello_dsynq_db/delta/1528442435.056242/.meta
data/.databases/hello_dsynq_db/delta/1528442435.056242/.meta/.dsynqmeta.json
data/.databases/hello_dsynq_db/delta/1528442435.056242/item.d.csv.xz
data/.databases/hello_dsynq_db/delta/1528442435.056242/item.i.csv.xz
data/.databases/hello_dsynq_db/delta/1528442435.056242/item.u.csv.xz
data/.databases/hello_dsynq_db/delta/1528442435.056242_0
data/.databases/hello_dsynq_db/delta/1528442435.056242_0/.meta
data/.databases/hello_dsynq_db/delta/1528442435.056242_0/.meta/.dsynqmeta.json
data/.databases/hello_dsynq_db/delta/1528442435.056242_0/item.i.csv
data/.databases/hello_dsynq_db/delta.rev
data/.databases/hello_dsynq_db/delta.rev/1528442433.821985
data/.databases/hello_dsynq_db/delta.rev/1528442433.821985/item.d.csv.xz
data/.databases/hello_dsynq_db/delta.rev/1528442433.821985/item.i.csv.xz
data/.databases/hello_dsynq_db/delta.rev/1528442433.821985/item.u.csv.xz
data/.databases/hello_dsynq_db/delta.rev/1528442435.056242
data/.databases/hello_dsynq_db/delta.rev/1528442435.056242/item.d.csv.xz
data/.databases/hello_dsynq_db/delta.rev/1528442435.056242/item.i.csv.xz
data/.databases/hello_dsynq_db/delta.rev/1528442435.056242/item.u.csv.xz
Now dsynq delta apply
detects the squashed delta and applies only that one.
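How the squashed delta is selected is not spelled out here. One plausible rule — purely an assumption based on the <timestamp>_0 naming, not Dsynq's documented logic — is that a squashed delta supersedes every original delta up to its time point, while later deltas still apply on top:

```python
# Hypothetical selection sketch: given delta directory names, prefer the
# latest squashed delta ("<timestamp>_0") and keep only originals that are
# strictly newer than the time point it covers.
def plan_apply(delta_names):
    squashed = [n for n in delta_names if n.endswith("_0")]
    if not squashed:
        return sorted(delta_names)
    latest = max(squashed)                    # e.g. "1528442435.056242_0"
    cutoff = float(latest.rsplit("_", 1)[0])  # time point it covers
    later = [n for n in delta_names
             if not n.endswith("_0") and float(n) > cutoff]
    return [latest] + sorted(later)

print(plan_apply(["1528442433.821985", "1528442435.056242",
                  "1528442435.056242_0"]))
# ['1528442435.056242_0']
```

With both original deltas covered by the squashed one, the plan collapses to a single application, matching the transcript below.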
> dsynq delta reset
Resetting delta(s) from database #1...
Database: hello_dsynq_db
Deleting all data from the tables...
Deleting data from table "item"...
Database delta reset successfully!
> dsynq delta apply
Applying delta(s) to database #1...
Database: hello_dsynq_db
Transitioning to database delta @1528442435.056242...
Making the reverse database delta for @1528442435.056242_0...
Inserting...
Inserting into table "item"
Updating...
Deleting...
Database deltas applied successfully!
> mysql -u root hello_dsynq_db -e "SELECT * FROM item;"
id code name
1 100 One
3 30 Three
Instead of applying two deltas, this time Dsynq only has to apply one. Moreover, the squashed database delta does not include item Two at all, since it turned out to be redundant, and item One comes with the correct code 100 right from the start.
Finally, check in the data.
> dsynq checkin -m "squashed hello_dsynq_db"
sending incremental file list
sending incremental file list
.databases/hello_dsynq_db/delta.rev/
.databases/hello_dsynq_db/delta.rev/1528442435.056242_0/
.databases/hello_dsynq_db/delta.rev/1528442435.056242_0/item.d.csv.xz
76 100% 0.00kB/s 0:00:00
76 100% 0.00kB/s 0:00:00 (xfr#1, to-chk=16/30)
.databases/hello_dsynq_db/delta/
.databases/hello_dsynq_db/delta/1528442435.056242_0/
.databases/hello_dsynq_db/delta/1528442435.056242_0/item.i.csv.xz
104 100% 101.56kB/s 0:00:00
104 100% 101.56kB/s 0:00:00 (xfr#2, to-chk=2/30)
.databases/hello_dsynq_db/delta/1528442435.056242_0/.meta/
.databases/hello_dsynq_db/delta/1528442435.056242_0/.meta/.dsynqmeta.json
147 100% 143.55kB/s 0:00:00
147 100% 143.55kB/s 0:00:00 (xfr#3, to-chk=0/30)
Compressing database delta(s)...
Compressing database delta(s) in database #1...
Database: hello_dsynq_db
delta/1528442435.056242_0
delta.rev/1528442435.056242_0
Database delta(s) compressed
Data checked in successfully!