My peers and I work on a language centered around "constructive data modeling" (the first time I've heard it called that). We implement integers, and indeed things like non-empty lists, using algebraic data types, for example. You can both have a theory of values that doesn't rely on trapdoors like "int32" or "string", and encode invariants, as this article covers.
As I understand it, the primary purpose of newtypes is actually just to work around typeclass issues like the examples mentioned at the end of the article. They are specifically designed to be zero cost, because you don't want to pay a runtime penalty just for working around the typeclass instance already being taken for the type you want an instance for. Making an abstract data type by not exporting the data constructors can be done with or without newtype.
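A minimal sketch of the canonical case (hand-rolled versions of the Sum and Product wrappers from Data.Monoid, primed here to avoid clashing with them): Int can only carry one Monoid instance, so each desired instance gets its own zero-cost wrapper.

newtype Sum'     = Sum'     { getSum'     :: Int }
newtype Product' = Product' { getProduct' :: Int }

instance Semigroup Sum'     where Sum' a     <> Sum' b     = Sum' (a + b)
instance Semigroup Product' where Product' a <> Product' b = Product' (a * b)

instance Monoid Sum'     where mempty = Sum' 0
instance Monoid Product' where mempty = Product' 1

-- total [2,3,4] == 9, prod [2,3,4] == 24; the wrappers vanish at runtime.
total, prod :: [Int] -> Int
total = getSum'     . foldMap Sum'
prod  = getProduct' . foldMap Product'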
The alternative to newtypes is probably to go the same route as OCaml and have people explicitly bring their own instances for typeclasses, instead of allowing each type only one instance?
I think OCaml calls these things modules, or something along those lines. But the concepts are similar. For most cases, when there's one obvious instance that you want, having Haskell pick the instance is less of a hassle.
IME this is exactly backwards: type safety is mostly about names, everything else is a nice-to-have. Yes, you can bypass your name checks if you want to, but you can bypass any type check if you want to. Most relevant type relationships in most programming are business relationships that would be prohibitively expensive to express in a full formalism if that was even possible. But putting names on them is cheap, easy, and effective. The biggest win from typed languages comes from using these basic techniques.
Hmm, IME the preferred type systems are structural - a function shouldn't care what the name is of the struct passed to it, it should just work if it has the correct fields.
I think that's backwards - ultimately everything on a computer is just bytes, so if you push that philosophy to the limit then you would write untyped functions and they can "just work" on any input (just not necessarily giving results that are sensible or useful if the input is wrong). The point of a type system is to help you avoid writing semantically wrong code, to bring errors forward, and actually the most important and valuable use case is distinguishing values that are structurally identical but semantically different (e.g. customer ID vs product ID, x coordinate vs y coordinate, immutable list vs read view of mutable list, sorted vs unsorted...).
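In newtype terms that distinction costs one line per identifier. A sketch (the names are invented for illustration):

newtype CustomerId = CustomerId Int
newtype ProductId  = ProductId  Int

-- Same runtime representation as a bare Int, but swapping the two
-- arguments is now a compile-time error rather than a silent bug.
lookupOrder :: CustomerId -> ProductId -> IO ()
lookupOrder (CustomerId c) (ProductId p) =
  putStrLn ("customer " ++ show c ++ ", product " ++ show p)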
I think the structural type approach leans heavily into the "computation is just data and its transformations", so it makes sense for it to treat data as the most important thing. You end up thinking less about classification and more about the transformations.
I'm not saying the nominal approach to types is wrong or bad, I just find my way of thinking is better suited for structural systems. I'm thinking less about the semantics around product_id vs user_id and more about what transforms are relevant - the semantics show up in the domain layer.
Take a vec3, for example: in a structural system you could apply a function designed for a vec2 to it, which has practical applications.
> I think the structural type approach leans heavily into the "computation is just data and its transformations"
But it's never "just data". My password is different in many ways from my username. Don't you ever log/print it by accident! So even if they are structurally the same, we MUST treat them differently. Hence any approach that only ever looks at things structurally is deeply flawed in the context of safe software development.
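One hedged sketch of enforcing exactly that with wrapper types (names invented): keep the structure identical, but give the password its own Show instance that refuses to reveal anything.

newtype Username = Username String deriving Show
newtype Password = Password String

-- Printing or logging a Password can never leak the wrapped value.
instance Show Password where
  show _ = "Password <redacted>"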
Structural type systems mostly don’t support encapsulation (private members that store things like account numbers) without some sort of weird add-on, while nominal type systems support encapsulation directly (because the name hides structure). The canonical example is a cowboy and a picture that both have a draw method.
TS doesn’t really. TS simply treats private fields as public ones when it comes to structural type checks. TS is unsound anyways, so not providing hard guarantees about field access safety is right up its alley. More to the point, if you specify a class type with private fields as a requirement, whatever you plug into that requirement has to have those private fields, they are part of the type’s public signature.
To see where structural type systems fall down, think about a bad case: dealing with native state, where you have a private long field with a pointer hiding in it that gets used in native calls. Any “type” that provides that long will fit the type, leading to seg faults. A nominal type system allows you to make assurances behind the class name.
class Foo {
  public bar = 1;
  private _value = 'hello';

  static doSomething(f: Foo) {
    console.log(f._value);
  }
}

class MockFoo { public bar = 1; }

let mock = new MockFoo();
Foo.doSomething(mock); // Error: property '_value' is missing in type 'MockFoo'
Which is why you'd generally use interfaces, either declared or inline.
In the pointer example, if the long field is private then it's not part of the public interface and you shouldn't run into that issue no?
You mean like if you have two types which are identical but you want your type system to treat them as distinct? To me that's a data modelling issue rather than something wrong with the type system, but I understand how it can sometimes be unavoidable and you need to work around it.
I think it also makes more sense in immutable functional languages like clojure. Oddly enough I like it in Go too, despite being very different from clojure.
In Rust I find myself gaining a good bit of type safety without losing ergonomics by wrapping types in a newtype then implementing Deref for them. At first it might seem like a waste, but it prevents accidentally passing the wrong type of thing to a function (e.g. a user UUID as a post UUID).
These are possibly situations where I’d resort to a panic on the extra branch rather than complicate the return type.
Providing a proof of program correctness is pretty challenging even in languages that support it. In most cases careful checking of invariants at runtime (where not possible at compile time) and crashing loudly and early is sufficient for reliable-enough software.
The author seems concerned about compile-time range checking: did you handle the full range of inputs?
Range checking can be very annoying to deal with if you take it too seriously. This comes up when writing a property testing framework. It's easy to generate test data that will cause out-of-memory errors - just pass in maximum-length strings everywhere. Your code accepts any string, right? That's what the type signature says!
In practice, setting compile-time limits on string sizes for the inputs to every internal function would be unreasonable. When using dynamically allocated memory, the maximum input size is really a system property: how much memory does the system have? Limits on input sizes need to be set at system boundaries.
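A sketch of what that looks like with QuickCheck (sizes invented, standard Test.QuickCheck API assumed): nothing about the String type stops a generator from handing every function a pathologically large input.

import Test.QuickCheck

-- A generator that takes "any String is accepted" at its word.
hugeString :: Gen String
hugeString = vectorOf (50 * 1000 * 1000) arbitrary  -- a list this size easily runs to gigabytes of heap

-- Forcing such inputs tends to exhaust memory long before the property
-- says anything interesting about the code under test.
prop_handlesAnyString :: Property
prop_handlesAnyString = forAll hugeString (\s -> length s > 0)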
Perhaps it's because I'm not a Haskeller, but I'm not sure I'm sold on encoding this into the type system. In Go (and other languages), for example, you would simply use a struct with a hidden int, and receiver methods for construction/modification/access. I'm not sure I see the benefit of the type ceremony around it.
Anyways, this was a big deal in the late 90s; see e.g. opaque types: https://en.wikipedia.org/wiki/Opaque_data_type
It seems ok in upcoming languages with polymorphic sum types (eg Roc “tags”) though?
Correct fields by...name? By structure? I'm trying to understand.
In other words the full range of Int?
Is newtype still bad?
In other words how much of this criticism has to do with newtype not providing sub-ranging for enumerable types?
It seems that it could be extended to do that.
In such languages that's the equivalent of a newtype in Haskell.
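For reference, a sketch of that Haskell-side pattern, which also speaks to the sub-ranging question a few comments up: the data constructor stays unexported, so the only way to build the value is through a range-checked function. Module and function names here are invented.

module Percent (Percent, getPercent, mkPercent) where

-- The Percent data constructor is not exported, so callers cannot
-- bypass the range check below.
newtype Percent = Percent { getPercent :: Int }
  deriving Show

mkPercent :: Int -> Maybe Percent
mkPercent n
  | n >= 0 && n <= 100 = Just (Percent n)
  | otherwise          = Nothing

This is essentially the Go hidden-field struct: the same ceremony, just enforced by the module's export list instead of field visibility.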