In my programming language I have some sort of "borrowing" too (although it's named differently). But my language has no dynamic typing; only static typing is used, so all checks happen at compile time and have no runtime cost. Why bother with dynamic typing and pay runtime costs for it?
> The goal is that most of your code can have the assurances of static typing, but you can still opt in to dynamically typed glue code to handle REPLs, live code reloading, runtime code generation, malleable software, etc.
Dynamic typing is neat, I actually prefer it to static typing. Most people who think they have a problem with dynamic typing actually have a problem with weak typing.
There is no consistent definition of the term "weak typing". Do you mean implicit coercion?
> Most people who think they have a problem with dynamic typing actually have a problem with weak typing.
Ironically, I would counter that, in my experience, most people who have a problem with static typing actually have a problem with verbose type systems, like Java's or C++'s — or Rust's. (Rust is at least gaining something for its verbosity.)
Type inference is a neat way to bridge the gap. OCaml, Haskell, and Swift (to name a few) all feature type inference that gives you the benefits of static types without as much syntactic overhead.
The standard complaint about type errors that static analysis would have caught has nothing to do with weak typing. Nor does the one about your editor being unable to reliably list the available operations when you press `.` and look at the autocomplete list. If you think the only thing people find wrong with dynamic typing is JS `==`, you are swinging at a strawman from a decade ago.
Dynamic typing doesn't scale to large teams. It is, however, great for small projects, or when optional typing is supported, a lesson from the structured BASIC dialects that took a long time to sink in.
It is no accident that all mainstream dynamic languages now have optional typing support, either in the language directly or via linters.
Oh yeah, I'm massively in favour of gradual typing. Python's choice not to actually enforce type hints is perhaps the most moronic language-design decision I've ever seen.
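For anyone who hasn't seen it: Python treats annotations as pure metadata at runtime, so a mismatched call sails straight through unless an external checker like mypy flags it. A minimal demonstration:

```python
# Python ignores annotations at runtime: this call succeeds despite the
# type hint, and the mismatch only surfaces if you run a static checker.
def double(x: int) -> int:
    return x * 2

print(double("ha"))  # runs fine, prints "haha" -- no runtime enforcement
```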
Sorta? Python has fairly strong types, but it's no fun debugging a `'NoneType' object has no attribute 'foo'` error deep inside some library function, with the call site 1000 LoC away from where the erroneous `None` originally arose due to a typo.
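The failure mode described above can be sketched in a few lines (all names hypothetical): the typo produces a `None` silently, and the crash fires somewhere else entirely.

```python
# A typo makes a lookup return None silently; the crash only happens
# later, inside code far away from the actual mistake.
def find_user(users, name):
    return users.get(name)  # misspelled key -> None, no error here

def format_greeting(user):           # imagine this is 1000 LoC away
    return "Hello, " + user.upper()  # AttributeError raised HERE

users = {"alice": "Alice Smith"}
user = find_user(users, "alcie")     # the typo: "alcie", not "alice"
try:
    print(format_greeting(user))
except AttributeError as e:
    print(e)  # 'NoneType' object has no attribute 'upper'
```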
It's not just Python, either; I've hit the same issue in Common Lisp.
Yes, one can use contracts, unit tests, and static analysis, but what is a type checker anyway, other than a very strict static analysis tool?
The term "unityped" is used as well, and at the typing level it also makes sense: you have one type, `object`; each value object carries its type object alongside it (the "tag"), and at runtime every operation checks whether the value's type object provides the operation the code is trying to apply (or maybe each value object directly knows the operations it supports). I think I prefer this term.
"syntactic type" is a weird term to me, though. Is that in common use?
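The tag-and-check model described above can be sketched as a toy interpreter fragment (all names here are hypothetical, just to make the mechanism concrete):

```python
# Every value is one kind of object carrying a runtime tag; each
# operation inspects the tags before doing anything.
class Value:
    def __init__(self, tag, payload):
        self.tag = tag          # the runtime type tag
        self.payload = payload

def add(a, b):
    # the runtime check: do these tags support the operation?
    if a.tag == "int" and b.tag == "int":
        return Value("int", a.payload + b.payload)
    raise TypeError(f"cannot add {a.tag} and {b.tag}")

print(add(Value("int", 1), Value("int", 2)).payload)  # 3
try:
    add(Value("int", 1), Value("str", "x"))
except TypeError as e:
    print(e)  # cannot add int and str
```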
> The point of types is to prove the absence of errors
Maybe for you. Originally, static typing existed to make the compiler's job easier. Dynamic typing was seen as a feature that allowed for faster prototyping.
And no, dynamic typing does not mean untyped. It just means type errors are checked at runtime instead of compile time.
You can have strongly typed dynamic languages. Common Lisp is a very good example.
Weak typing is a design mistake. Dynamic typing has its place as it allows you to have types that are impossible to express in most static type systems while avoiding the bureaucratic overhead of having to prematurely declare your types.
The best languages allow for gradual typing. Prototype first, then add types once the general shape of your program becomes clear.
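Python illustrates the same split: strongly typed, with the check simply deferred to runtime. Mixing unrelated types is a well-defined error, not a silent coercion, as this minimal sketch shows:

```python
# Strong and dynamic: "1" + 1 is a well-defined TypeError raised at
# runtime rather than rejected at compile time; nothing is silently
# coerced the way a weakly typed language would do it.
try:
    result = "1" + 1
except TypeError as e:
    print(type(e).__name__)  # TypeError
```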
Interestingly enough, I have never needed them there. Granted, I have written a few orders of magnitude less Haskell than I have C++. Still, the difference is worth interrogating (when I'm less sleep deprived).
Can someone confirm whether I correctly understand the difference in semantics between the final-line eval of
x^
vs.
x*
?
It seems like either one evaluates the contents of the `box`, and would only make a difference if you tried to use `x` afterwards? Essentially if you final-line eval `x^` and then decide you want to continue that snippet, you can't use `x` anymore because it's been moved. Awkwardly, it also hasn't been assigned so I'm not sure the box is accessible anymore?
No, the real problem is invisibility, i.e. the absence of readability:
In a "dynamic typing" program, the interpreter knows what `a` is, but YOU don't.
In a very strong sense: you can imagine that `a` is an `int` because, well, you wrote the program, right? But in fact, that is only a probabilistic assumption.
Some day, `a` will be a program that deletes the files on your computer.
The correct term for languages that don’t have syntactic types is “untyped”.
> Most people who think they have a problem with dynamic typing actually have a problem with weak typing.
All people who say things like this have never studied computer science.
The point of types is to prove the absence of errors. Dynamic typing just has these errors well-structured and early, but they're still errors.
If you want to apply the same operation to all of them, then they share some API commonality -- therefore you can use polymorphism or type erasure.
If they don't, you still need to know what types they are -- therefore you can use `std::variant`.
If they really are unrelated, why are you storing them together in the same container? Even then, it's trivial in C++: `std::vector<std::any>`.
> It seems like either one evaluates the contents of the `box`, and would only make a difference if you tried to use `x` afterwards?
More or less. `x^` moves the whole box, whereas `x*` copies the contents of the box.
> Awkwardly, it also hasn't been assigned so I'm not sure the box is accessible anymore?
Yes, if you move something and don't assign it, then it gets dropped, same as in Rust.