Begin transmission…
Postgres exports using DuckDB
We mentioned earlier that we are exploring Postgres streaming replication in the Basin project. Here, I want to discuss using DuckDB instead of streaming replication for the same purpose.
DuckDB is an in-memory analytical data management system. Like SQLite, it is an embedded database, but DuckDB is optimized for analytical workloads and can efficiently run complex queries over large datasets directly from within an application.
DuckDB has a thriving ecosystem of extensions. Here, we will look at two of them: postgres_scanner and parquet. Combining these two extensions, we can export a table backup to a Parquet file with only a few lines of code. Here is an example in Rust:
use duckdb::{Connection, Row};

let conn = Connection::open_in_memory()?;

// Install and load the Parquet extension
conn.execute_batch("INSTALL parquet; LOAD parquet;")?;

// Install and load the Postgres scanner extension
conn.execute_batch("INSTALL postgres_scanner; LOAD postgres_scanner;")?;

// Scan the Postgres table...
let pg_query = "SELECT * FROM postgres_scan('dbname=mydb user=myuser password=password host=localhost port=5432', 'public', 'mytable')";

// ...and copy the result into a Parquet file
let copy_query = format!("COPY ({}) TO 'fids.parquet' (FORMAT 'parquet');", pg_query);
conn.query_row(&copy_query, [], |row: &Row<'_>| {
    // COPY returns a single row containing the number of rows written
    println!("rows exported: {:?}", row.get::<_, usize>(0));
    Ok(())
})?;
At the current stage, Basin copies the data from a table and archives it in Filecoin. We could do the same using these DuckDB extensions. However, there are tradeoffs involved.
First, unlike the log-based replication approach, DuckDB's Postgres scanner performs a full table scan. For large tables, running this operation regularly could be expensive, and that cost is borne by the primary database server.
Second, the DuckDB table-scan approach is well suited to simple use cases such as building a cache over a primary database. Building a long-running replication pipeline would take more work: for instance, it would require maintaining a row pointer marking where the next table scan should start, and the approach really only helps with replicating tables that are append-only, as sketched below.
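To make that second point concrete, here is a minimal sketch of what a pointer-based incremental export might look like. It assumes a hypothetical append-only table with a monotonically increasing id column; the connection string, table name, and output file are placeholders, and the caller is responsible for persisting the high-water mark between runs:

// A minimal sketch of a pointer-based incremental export: scan only the rows
// added since the last run, assuming `mytable` is append-only with a
// monotonically increasing `id` column (all names here are hypothetical).
fn export_since(conn: &duckdb::Connection, last_id: i64) -> duckdb::Result<i64> {
    let pg_query = format!(
        "SELECT * FROM postgres_scan('dbname=mydb user=myuser password=password host=localhost port=5432', 'public', 'mytable') WHERE id > {}",
        last_id
    );
    let copy_query = format!("COPY ({}) TO 'delta.parquet' (FORMAT 'parquet');", pg_query);
    // COPY returns a single row with the number of rows written; the caller
    // would also record the new MAX(id) as the starting point for the next scan.
    conn.query_row(&copy_query, [], |row| row.get(0))
}

Note that even with the WHERE clause, whether Postgres avoids reading the whole table depends on the scanner's filter pushdown, so the primary can still pay a significant cost on each run.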
New Basin feature for listing information
by Bruno Calza
This week, we released a new version of the Basin CLI that adds two new commands: basin publication list and basin publication deals. With those, you can now list the publications of a given address and list deal information for a given publication. Check out the README for more details on their usage.
Announcing Open Data Hack winners
by Dan Buchholz
We've concluded the Open Data Hack and are excited to announce our top two teams!
First place went to 3D Print Near Me. The team (Dominic Hackett & Karthikeyan Ramachandran) built a decentralized application that enables a sharing economy for 3D printing services, bridging the gap between those who own 3D printers and those in need of their services. The platform includes a marketplace where users can acquire 3D printed items as NFTs, seamlessly merging digital ownership with physical creations. The Tableland prize focused on using our protocol with shared access to data, with special consideration for compute-over-data (the project leveraged Lilypad for this aspect). 3D Print Near Me used Tableland as the data store for 3D printing orders and marketplace listings, including user information, order details, and listings; it was a truly unique approach and used Tableland quite well (e.g., here and here).
Second place went to Tableland Attestation Service ("TAS"), which brings attestation services to the Filecoin network. The team (Nick Lionis) migrated the EAS protocol, together with Tableland, to both the Filecoin Calibration and Polygon Mumbai networks. The project leverages Tableland to integrate seamlessly with Filecoin and EVM-compatible blockchains, empowering users to generate and verify attestations that provide cryptographically signed confirmation of the authenticity and accuracy of information. It also offers file attestations for various file types using Lighthouse storage, plus subscription-based and access control schemas that allow for tailored access to attestations and versatile data management, including support for private or open access control schemas. Overall, it was tough to choose a winner, and TAS did a great job bringing EAS into the Filecoin realm while storing a wealth of data in Tableland (e.g., here).
Lastly, quite a few teams came close to winning top prizes, and we still wanted to reward their efforts from our prize pool. Those teams include: Little Sisters, DataX, Lilylatte, DeForm, Daopia, SatStream, PastPost, PrecedentDAO, Intelliquiz, Recyclers DAO, RizzDocs, DogGo's, Paperstack, Libra, Web3 Blog Creator, MUSICLOG, and LoveChain.
Developments on the permissionless horizon
The work of the Farcaster team has caught our attention and ignited our enthusiasm. Their determined stride toward a permissionless framework, as seen in their permissionless onboarding initiative and migration to OP Mainnet, speaks volumes about the open digital frontier we're heading towards. The phrase "sufficiently decentralized," which they may have coined, strikes a chord with us, embodying a pragmatic yet forward-thinking approach to blockchain development. Our exploration of Warpcast has revealed a robust ecosystem rich with channels (we especially like the Dev channel) and features that significantly enhance user engagement. The user-centric design stands out, showcasing one of the finest web3-native user experiences we've come across.
The evolution of this ecosystem represents a compelling case study for decentralized development and permissionless networks, showcasing a harmonious blend of innovation and accessibility. We hope you'll find us casting there from @tableland more and more moving forward.
Other updates this week
We’re mid-flight in ETHGlobal’s ETHOnline hackathon…there’s been a ton of activity from developers looking to build on Tableland! We focused our prizes on our new Studio web app & CLI as well as Tableland Basin. Accompanying these are some new guides that walk through things in more detail. The Studio docs are just about ready to go live on our docs site, and the Basin guide goes into detail on how to start up Postgres and Basin from scratch in a self-hosted setup.
End transmission…
Want to dive deeper, ask questions, or just nerd out with us? Jump into our weekly research office hours. Or, for hands-on technical support, you can join our weekly developer office hours.
And if you’d like to discuss any of these topics in more detail, comment on the issue over in GitHub!