Somewhat off topic: as someone who hasn't used PostgreSQL and only has experience with MySQL/MariaDB, I've never liked writing queries with numbered parameters taken from an array, with placeholders like the $1 in this example. I find queries much easier to read and debug when I pass parameters with string keys, basically:
`UPDATE t SET x=:x WHERE 1`
`{x:42}`
I found that the original node-mysql didn't even allow this, so I wrote my own parser on top of it. But I don't see this style of binding used in examples very often. Is it frowned upon for some reason?
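For illustration, a minimal sketch of what such a parser can look like (bindNamed is a hypothetical helper, and its naive regex doesn't skip string literals):

function bindNamed(sql: string, params: Record<string, unknown>): { sql: string; values: unknown[] } {
  const values: unknown[] = [];
  // Rewrite each :name placeholder to the ? placeholder node-mysql expects,
  // collecting the values in the order the placeholders appear.
  const rewritten = sql.replace(/:(\w+)/g, (_, name: string) => {
    if (!(name in params)) throw new Error(`missing parameter: ${name}`);
    values.push(params[name]);
    return '?';
  });
  return { sql: rewritten, values };
}

// bindNamed('UPDATE t SET x=:x WHERE 1', { x: 42 })
// -> { sql: 'UPDATE t SET x=? WHERE 1', values: [42] }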
A sometimes easier solution now in TS/JS is to use a tagged template function to inline the parameters and let the library autonumber them.
const x = 42
const id = 'example'
const { rows } = pgquery`Update t set x = ${x} where id = ${id}`
The pgquery function can convert that into the $1 and $2 that Postgres expects, but the source code is a little easier to read and has named parameters.
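A minimal sketch of such a tag (pgquery is the hypothetical name from the example above, wired here to node-postgres's Pool):

import { Pool } from 'pg'

const pool = new Pool() // connection settings from the usual PG* env vars

// Join the literal parts with autonumbered $1, $2, ... and pass the
// interpolated values as protocol-level parameters, never as strings.
function pgquery(strings: TemplateStringsArray, ...values: unknown[]) {
  const text = strings.reduce((sql, part, i) => sql + '$' + i + part)
  return pool.query(text, values)
}

// const { rows } = await pgquery`Update t set x = ${42} where id = ${'example'}`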
Of course, it potentially makes debugging the query from the database side a little harder: the query in its running form will still show the $1 and $2 placeholders, so you have to count "holes" to figure out which is which when trying to grep for the source line generating that query. I know that's why some frown on these template-based auto-placeholders: they want the code to better resemble the queries as they actually run in Postgres.
(Also yeah, it might be nice if Postgres also supported named placeholders like some of the other SQL databases do.)
PostgreSQL uses the format $1, $2 in the protocol, so I think it's just that nobody has bothered to implement named parameters in clients.
In another style, postgres.js uses calls such as sql`select * from t where id = ${variable}` (which is safe because it's a tagged template, not string interpolation).
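A minimal sketch of that style (the connection string is a placeholder):

import postgres from 'postgres'

const sql = postgres('postgres://localhost:5432/mydb') // placeholder

const id = 'example'
// The tag sends ${id} as a protocol-level parameter ($1), not by string interpolation
const rows = await sql`select * from t where id = ${id}`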
Clorinde does this. It lets you write raw SQL (with keyword arguments) and generates a strongly typed Rust API (that deals with things like Option for nullable columns, etc.):
https://github.com/halcyonnouveau/clorinde/?tab=readme-ov-fi...
This is basically how we do it in .NET. With Dapper it’s particularly neat sometimes because you can just pass an object that you were using anyway, and it will match parameter names to its properties case-insensitively.
I.e.
Query("select * from MyTable where Id = @Id", myEntity)
The idiom is to use anonymous objects which you can new up inline like “new { id, age = 5 }”, where id is an existing variable that will automatically lend its name. So it’s pretty concise.
The syntax is SQL Server native (it supports named params at the protocol level), but the Npgsql data provider converts it to PG's positional system automatically.
How do you guys solve the problem of conforming DB schema to TS interfaces (or vice versa depending on what you like)?
Do you manually keep them in sync (that's what I'm leaning toward as the most practical solution)? Do you introspect the DB schema? Or maybe use something like Drizzle, which auto-generates SQL migrations to keep the DB schema in sync?
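For the Drizzle route, a rough sketch (the table and columns here are hypothetical): the schema is declared in TypeScript, and drizzle-kit diffs it against the database to emit SQL migrations.

// schema.ts
import { pgTable, serial, text, integer } from 'drizzle-orm/pg-core'

export const users = pgTable('users', {
  id: serial('id').primaryKey(),
  name: text('name').notNull(),
  age: integer('age'), // nullable column -> number | null on the TS side
})

// `npx drizzle-kit generate` then emits the SQL migration for any schema change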
Kysely is a query builder: you build queries by calling JavaScript functions, while with pg-typesafe you write SQL directly.
I used Kysely before creating pg-typesafe, and came to the conclusion that writing SQL directly is more convenient.
A query builder works well for simple cases (db.selectFrom("t").where("id", "=", "1") looks a lot like the equivalent SQL); however, for more complicated queries it all falls apart. I often had to check the docs to find how to translate some predicate from SQL into the required idiom. Also, I don't think Kysely can automatically infer the return type of PostgreSQL functions, while pg-typed does (it asks PostgreSQL for it).
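For reference, the builder style looks roughly like this (the Database interface is an assumed example, not taken from either project):

import { Kysely, PostgresDialect } from 'kysely'
import { Pool } from 'pg'

// Hand-written (or generated) schema types that Kysely checks queries against
interface Database {
  t: { id: string; x: number }
}

const db = new Kysely<Database>({
  dialect: new PostgresDialect({ pool: new Pool() }),
})

// The simple case reads almost like the SQL it generates
const rows = await db.selectFrom('t').selectAll().where('id', '=', '1').execute()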
I see they use the same overall approach as pg-typed (asking for a ParameterDescription / RowDescription, which aren't usually exposed by PG drivers), but there are interesting differences in the details. Also, this made me realise that I could type enums automatically as well.
I think plenty of ML languages have similar things. In F#, SQL Type Providers (https://learn.microsoft.com/en-us/dotnet/fsharp/tutorials/ty...) have long been a mainstay, which makes using an ORM with F# almost meaningless IMO. SQLProvider is probably the most well known one: https://fsprojects.github.io/SQLProvider/
It's really beautiful. You get type safety with SQL: if your code compiles, you are guaranteed to have both valid F# code and valid, executable SQL. You also get to create composable queries.
And there are other, thinner/leaner type providers as well. My favourite Postgres one is: https://github.com/Zaid-Ajaj/Npgsql.FSharp
This simple sample executes a query, reads the results as a table, then maps the rows:

open Npgsql.FSharp

type User = {
    Id: int
    FirstName: string
    LastName: string
}

let getAllUsers (connectionString: string) : User list =
    connectionString
    |> Sql.connect
    |> Sql.query "SELECT * FROM users"
    |> Sql.execute (fun read ->
        {
            Id = read.int "user_id"
            FirstName = read.text "first_name"
            LastName = read.text "last_name"
        })
Very nice! I am the author of Kanel*, which generates types for each table, view, etc. I tend to use it together with Knex, which gives me strong type safety for simple selects and updates, and sporadic type safety for complex queries with joins, etc.
The advantage of your approach (I guess) is increased type safety for complex queries, trading off the loss of "fundamental" types such as branded ID types? I guess the two approaches are quite complementary; perhaps I should try that.
* https://github.com/kristiandupont/kanel
Kanel is great! That's what I used before pg-typesafe, and I still use it for dynamic queries.
Regarding the nominal/branded types, the typegen is configurable, so instead of e.g. mapping oid 20 to a bigint, you could map the field id of table foo to Foo["id"] from Kanel. See this example: https://github.com/n-e/pg-typesafe?tab=readme-ov-file#type-j...
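For concreteness, a branded ID type of the kind this mapping can produce (the names are hypothetical):

// Structurally a number, but not assignable across tables
type FooId = number & { readonly __brand: 'foo.id' }

interface Foo {
  id: FooId
  name: string
}

declare const plainNumber: number
// const id: FooId = plainNumber // type error: a plain number doesn't carry the brand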
Is there a way to make this work with https://github.com/porsager/postgres ? It's very close to raw SQL but with safe substitutions. I use it with Zod, but it would be great to have the types derived automatically.
Unfortunately I haven't found a way to make it work.
It would be quite easy to extract the queries to compute the types, but TypeScript doesn't handle tagged template literals well enough to link the query passed to the sql`` template to the return type.
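To illustrate the limitation (the sql signature below is schematic, not postgres.js's actual types): a tag only receives a TemplateStringsArray, which erases the literal query text at the type level, so generated types have nothing to key on.

// Schematic tag signature: the literal parts arrive as TemplateStringsArray,
// so TypeScript cannot map the query text to a generated row type.
declare function sql<T = unknown>(strings: TemplateStringsArray, ...params: unknown[]): Promise<T[]>

// The row type has to be supplied by hand; it can't be inferred from the SQL:
const rows = await sql<{ id: string }>`select id from t where x = ${42}`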
Seems very cool, if it works :) My suggestion here would be to provide a TS playground environment where I can inspect and poke around to see that the types really do work. I'd also love to see some more complex queries with joins, etc. I'm curious if there are any good patterns for dealing with dynamic query building or composing queries?
For now, I type these manually, which is acceptable for my usage as they are pretty rare compared to static queries.
Per "how to handle dynamic queries", it's admittedly pretty different b/c we're an ORM (https://joist-orm.io/) that "fetches entities" instead of adhoc SQL queries, but our pattern for "variable number of filters/joins" looks like:
const { date, name, status } = args.filter;
await em.find(Employee, { date, name, employer: { status } });
Where the "shape" of the query is static, but `em.find` will drop/prune any filters/joins that are set to `undefined`.
So you get this nice "declarative / static structure" that gets "dynamically pruned to only what's applicable for the current query", instead of trying to jump through "how do I string together knex .orWhere clauses for this?" hoops.
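Outside of an ORM, the same pruning idea can be sketched generically (an illustration, not Joist's implementation; column names must come from trusted code, never from user input):

// Build a WHERE clause from only the filters that are actually defined,
// numbering the placeholders as we go.
function buildWhere(filters: Record<string, unknown>): { clause: string; params: unknown[] } {
  const parts: string[] = []
  const params: unknown[] = []
  for (const [column, value] of Object.entries(filters)) {
    if (value === undefined) continue // prune unset filters
    params.push(value)
    parts.push(`${column} = $${params.length}`)
  }
  return { clause: parts.length ? `where ${parts.join(' and ')}` : '', params }
}

// buildWhere({ date: undefined, name: 'a' }) -> { clause: 'where name = $1', params: ['a'] }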
I didn't look too much into sqlc-gen-typescript because the project looks abandoned (no commits in 2 years, many open PRs).
Regarding sqlc in general, it is focused on having the SQL queries in .sql files, while pg-typed is focused on having the queries inline (though I plan to add .sql file support). I like the latter approach better, as for small queries used in only one place, it is a little cumbersome to add them to a different file and find a name for them.
The queries look a bit more clumsy then, but you won't cause problems by misspelling column names.