I saw a talk by someone from Google about their experiences using Rust in the Android team. Two points stuck out: they migrated many projects from Python, so performance can't have been that much of a concern, and in their surveys the features people liked most were basics like pattern matching and ADTs. My conclusion is that for a lot of tasks the benefit from Rust came from ML circa 1990, not lifetimes etc. I feel if OCaml had got its act together around 2010 with multicore and a few other annoyances[1] it could have been Rust. Unfortunately it fell into the gap between what academia could justify working on and what industry was willing to do.
[1]: Practically speaking, the 31-bit ints are annoying if you're trying to do any bit bashing, but aesthetically the double semicolons are an abomination and irk me far more.
StopDisinfo910 3 hours ago [-]
> I feel if OCaml had got its act together around about 2010 with multicore and a few other annoyances[1]
OCaml had its act together. It was significantly nicer than Python when I used it professionally in 2010. Just look at what Jane Street achieved with it.
The main impediment to OCaml was always that it was neither American nor mainly developed in the US.
People like to believe there is some technical merit to language popularity, but the reality is that it's all fashion-based. Rust is popular because they did a ton of outreach. They used to pay someone full time mostly to toot their horn.
nine_k 17 hours ago [-]
I'd say that Google strives to have a reasonably short list of languages approved for production-touching code. Rust can replace / complement C++, while OCaml cannot (it could replace Go instead... fat chance!). So I suspect that the team picked Rust because it was the only blessed language with ADTs, not because they wouldn't have liked something with faster compile times.
No way OCaml could have stolen Rust's thunder: we have a number of very decent and performant GC-based languages, from Go to Haskell; we only had one bare-metal-worthy expressive language in 2010, C++, and it was pretty terrible (still is, but before C++11 and C++17 it was even more terrible).
GhosT078 11 hours ago [-]
In 2010, Ada 2005 was the most bare-metal-worthy expressive language. Now that would be Ada 2022.
nine_k 9 hours ago [-]
While at it: what was / is holding Ada back? I haven't seen a single open-source project built in Ada, nor did I hear about any closed-source corporate project that uses Ada's superpowers. (Most likely these exist! But I did not see any available, or at least well-publicized.)
People will go to great lengths to use a tool that has some kind of superpower, despite syntactic weirdness or tooling deficiencies. People study and use LISP descendants like Clojure, APL descendants like K, "academic" languages like Haskell and OCaml; they write hobby projects in niche languages like Nim or Odin; they even use C++ templates.
Why is Ada so under-represented? It must have a very mature ecosystem. I suspect that it's just closed-source mostly, and the parties involved don't see much value in opening up. If so, Ada is never going to make it big, and will slowly retreat under the pressure of better-known open alternatives, even in entrenched areas like aerospace.
pjmlp 6 hours ago [-]
There are enough closed source corporate projects to keep 7 vendors in business, selling compilers, in an age developers hardly pay for their tools.
Ada was too hardware-demanding for the kind of computers people could afford at home, and we could already do Ada-like programming with Modula-2 and Object Pascal dialects; hence how Ada lost home computing. FreePascal/Delphi would be much more used today had it not been for Borland getting too greedy.
On big-iron systems, especially among UNIX vendors, they always wanted extra bucks for Ada.
When Sun started the trend among UNIX vendors of charging for developer tools as an additional SKU, Ada wasn't part of the package but an additional license on top. So when you already paid for C and C++ compilers, why would anyone pay a few thousand (select currency) more if not required to, only to feel good about writing safer software? Back in those days no one cared about the development cost of fixing security bugs.
michaelcampbell 14 minutes ago [-]
> While at it: what was / is holding Ada back?
I have a pet theory that it shares this with any heavily typed language: it's difficult, and people aren't willing to accept that difficulty even though, once you get it to compile at all, it'll probably work fine.
So many developers (and even more middle managers) are not willing to trade the quick-to-production ability of other languages for longer-term stability and fewer runtime errors.
Excellent summary of the commercial side, and I personally think the open source tooling with Alire got a LOT better. It's definitely worth checking out.
hardwaregeek 17 hours ago [-]
Wouldn’t Kotlin be a more reasonable choice in that case? It has ADTs and a lot of the same niceties of Rust.
michaelcampbell 12 minutes ago [-]
Kotlin never had the "shiny new" aspect to it that Rust did; everyone gave it a bit of side-eye for coming from a company that wrote the IDE to support it well.
Artamus 16 hours ago [-]
I'm inclined to think that the Python -> Rust migration was only for some odds and ends. I know the biggest recipient of Rust training was the Android platform team at first, which I think also used a lot of C++.
Kotlin is definitely available at Google, but when talking about sum types et al. it's not nearly as nice to use as Rust / OCaml.
sureglymop 15 hours ago [-]
Yes. It can also be compiled to native. I just think it was held back too much by the java/jvm backwards compatibility but then again that's probably also the justification for its existence.
I definitely find it (and jetpack compose) make developing android apps a much better experience than it used to be.
What I like a lot about Kotlin are its well-written documentation and the trailing-lambdas feature. That is definitely directly OCaml-inspired (though I also recently saw it in a newer language: the "use" feature in Gleam). But in Kotlin it looks nicer, imo. It allows declarative code to look pretty much like JSON, which makes it more beginner-friendly than the use syntax.
But Kotlin doesn't really significantly stand out among Java, C#, Swift, Go, etc. And so it is kind of doomed to be a somewhat domain specific language imo.
brabel 3 hours ago [-]
> ... the trailing lambdas feature. That is definitely directly OCaml inspired...
Kotlin has a very similar syntax to Groovy, which already had that feature (it looks identical in Groovy and Kotlin)... and I believe Groovy itself took that from Ruby (Groovy tried to add the most convenient features from Python and Ruby). Perhaps that is what came from OCaml? No idea, but I'd say the chance Kotlin copied Groovy is much higher, as JetBrains was using Java and Groovy before Kotlin existed.
DerArzt 14 hours ago [-]
I wouldn't say it's doomed. For projects in large organizations that have a large amount of java already, it provides better ergonomics while allowing interop with the existing company ecosystem.
pjmlp 6 hours ago [-]
Java is castrated on purpose on Android as a means to sell Kotlin.
If that wasn't the case, Google would support Java latest with all features, alongside Kotlin, and let the best win.
See how much market uptake Kotlin has outside Android, where it isn't being pushed and needs to compete against Java vLatest on equal terms.
StopDisinfo910 3 hours ago [-]
Blame Oracle. If they had been more forward looking and a bit less greedy, Java vLatest would be the default language on Android.
pjmlp 3 hours ago [-]
Not at all, I stand by Oracle on their lawsuit.
Android is Google's J++, which Sun sued and won.
Kotlin is Google's C#.
Plus, everyone keeps forgetting that Kotlin is a JVM-based language, that Android Studio and Gradle are implemented in JVM languages, and that JVMs are implemented in a mix of C, C++ and Java (zero Kotlin). Android still uses Java, only Google takes out of OpenJDK only what they feel like, and currently that is Java 17 LTS. Most of the work on OpenJDK was done by Oracle employees.
StopDisinfo910 3 hours ago [-]
> Not at all, I stand by Oracle on their lawsuit.
I think it will be very hard for us to find anything in common to agree on then.
Anyway, it’s pretty clear Google is pushing Kotlin because they don’t want to have anything to do with Oracle which has not been cleared by the verdict of their last trial. The situation has nothing to do with anything technical.
Blaming them for pushing Kotlin, when the alternative you offer is using a language they have already been sued over, seems extremely misguided to me.
pjmlp 2 hours ago [-]
We don't have to agree on anything; I wasn't asking for any agreement to start with.
I call them dishonest for comparing an outdated Java 7 subset with Kotlin, when back in 2017 the latest version was Java 9 and in 2025 it is Java 24; yet the documentation keeps using Java 8 for most Java-versus-Kotlin examples.
How come Google doesn't want anything to do with Oracle, when it is impossible to build an Android distribution without a JVM? Again, people like yourself keep forgetting that OpenJDK is mostly a product of Oracle employees (about 80%), with the remaining efforts distributed across Red Hat (IBM), IBM, Azul, Microsoft and JetBrains (I wonder what those do on Android). Kotlin doesn't build for Android without a JVM implementation, Gradle requires a JVM implementation, Android Studio requires a JVM implementation, Maven Central has JVM libraries, ...
If Google doesn't want anything to do with Oracle, why aren't they using Dart, created by themselves, instead of a language that is fully dependent on Oracle's kingdom for its very existence?
StopDisinfo910 1 hour ago [-]
> How come Google doesn't want to have anything with Oracle
They clearly don’t want to add anything which couldn’t be reasonably covered by the result of the previous trial.
Everything in the list you give was already there then. Moving to a more recent version of Java wouldn't be.
> OpenJDK is mostly a product from Oracle employees (about 80%)
Sun employees, not Oracle employees. Using Sun technology was fine, using Oracle technology is something else entirely.
pjmlp 21 minutes ago [-]
I advise you to educate yourself on who works on OpenJDK and who owns its copyrights.
"Once you have contributed several changes (usually two) you can become an Author. An author has the right to create patches but cannot push them. To push a patch, you need a Sponsor. Gaining a sponsorship is usually achieved through the discussions you had on the mailing lists.
In order to become an Author, you also need to sign the Oracle Contribution Agreement (OCA)."
"The OpenJDK Lead is an OpenJDK Member, appointed by Oracle, who directs the major efforts of the Community, which are new implementations of the Java SE Platform known as JDK Release Projects."
"Of the 26,447 JIRA issues marked as fixed in Java 11 through Java 22 at the time of their GA, 18,842 were completed by Oracle employees while 7,605 were contributed by individual developers and developers working for other organizations. Going through the issues and collating the organization data from assignees results in the following chart of organizations sponsoring the development of contributions in Java:"
To spare you the math, 77% were done by Oracle employees.
Now please show us how Kotlin compiles for Android without using Java.
Doesn't look like Google got rid of Oracle to me; more like they didn't even consider Dart, and Go couldn't stand a chance against the Java ecosystem among Android developers.
StopDisinfo910 7 minutes ago [-]
You wrote that 80% of OpenJDK was written by Oracle employees. That's patently untrue. Most of OpenJDK was written by Sun employees before Oracle bought Sun.
Your link doesn't change any of that, nor does your clearly condescending earlier comment. You are perfectly aware of the fact, by the way, and you know exactly what I meant, so I don't really understand the game you are playing.
Oracle can claim Sun contribution as their own as much as they want. It doesn’t change the fact that you would have to be insane to touch anything they do now that it’s Oracle property.
pjmlp 51 seconds ago [-]
What is pathetic is the quality of your answers. Who do you think wrote the code between Java 6 and Java 24?
I am playing the FACTS game.
rendaw 8 hours ago [-]
I think GP means "algebraic data types" not "abstract data types", probably specifically tagged unions. Both Kotlin and Java can (now) do something similar with sealed classes but it's quite less ergonomic.
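To make the "algebraic data types as tagged unions" point concrete, here is a minimal sketch in Rust (the `Shape`/`area` names are my own illustration, not from the thread): an enum is a tagged union, and `match` is checked for exhaustiveness, which is the ergonomics Kotlin/Java sealed classes approximate.

```rust
// A tagged union (sum type): a value is exactly one of these variants.
enum Shape {
    Circle { radius: f64 },
    Rect { w: f64, h: f64 },
}

// Exhaustive pattern match: the compiler rejects a missing variant.
fn area(s: &Shape) -> f64 {
    match s {
        Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
        Shape::Rect { w, h } => w * h,
    }
}

fn main() {
    let shapes = [Shape::Circle { radius: 1.0 }, Shape::Rect { w: 2.0, h: 3.0 }];
    let total: f64 = shapes.iter().map(area).sum();
    println!("total area: {total:.2}");
}
```

Deleting a variant arm from the `match` is a compile error, which is the "certainty on totality" people associate with ML-family languages.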
actionfromafar 16 hours ago [-]
The garbage collector in Kotlin makes it a no-go for C or C++ displacement.
pjmlp 4 hours ago [-]
As someone who left pure C++ applications behind in 2006, and has mostly written mixed-language projects since 1999, usually the displacement is more religious than anything else.
In many use cases, even if the performance is within the project's delivery deadlines, there will be worthless discussions about performance benchmarks completely irrelevant to the task at hand.
And ironically many of the same folks are using Electron based apps for their workflows.
swiftcoder 6 hours ago [-]
Potentially, but Kotlin is even more recent than Rust, and didn't get blessed internally at Google till somewhat later.
IshKebab 4 hours ago [-]
I agree. If OCaml had solved some of its bigger paper cuts it could have been a real player; its compilation time is much better than Rust's, too. The paper cuts:
* OPAM is quite buggy and extremely confusing.
* Windows support is very bad. If you ever tried to use Perl on Windows back in the day... it's worse than that.
* Documentation is terse to the point of uselessness.
* The syntax style is quite hard to mentally parse and also not very recoverable: if you miss some word or character, the error can be "the second half of the file has a syntax error". Not very fun. Rust's more traditional syntax is much easier to deal with.
Rust basically has none of those issues. Really the only advantage I can see with OCaml today is compile time, which is important, but it's definitely not important enough to make me want to use OCaml.
jll29 2 hours ago [-]
I'd say the Modula-2 inspired module system is a very valuable asset compared to today's Rust.
The only contact with OCaml I had was that I wrote a bug report to a university professor because I wanted his tool to process one of my files, but the file was larger than OCaml's int type could handle. That itself wasn't the problem; he wrote that it wasn't straightforward to fix. (This is a bug of the "couldn't have happened in Common LISP" type. But I guess even in C one could replace int by a FILE_SIZE_TYPE and #define it as size_t, for instance.)
pjmlp 2 hours ago [-]
It is more the other way around: ML predates Modula-2, and module-system-like ideas were already present in Mesa and UCSD Pascal. :)
gerdesj 13 hours ago [-]
"I feel if OCaml had got its act together ..."
The great thing is we have choice. We have a huge number of ways to express ideas and ... do them!
I might draw a parallel with the number of spoken languages extant in the UK (only ~65M people). You are probably familiar with English; there are rather a lot more languages here. Irish, Scottish Gaelic, Welsh: these are the thriving Celtic languages (and they probably have some sub-types). Cornish formally died out in the sixties (the last two sisters who spoke it natively passed away) but it has been revived by some locals, and given that living people could communicate with relatives who had first-hand experience, I think we can count it a language that was largely saved. Cumbric... its counting is still used by shepherds: something like yan, tan, tithera, toe.
I am looking at OCaml because I'm the next generation to worry about genealogy in my family, and my uncle has picked GeneWeb to store the data, taking over from TMG, a Windows app. His database contains roughly 140,000 individuals. GeneWeb is programmed in OCaml.
If you think that programming languages are complicated ... have a go at genealogy. You will soon discover something called GEDCOM and then you will weep!
DrewADesign 10 hours ago [-]
For personal projects? Sure. In nearly any development organization larger than one person, unilaterally deciding to use OCaml instead of what everybody else uses would go over about as well as unilaterally deciding to use Aramaic at meetings.
beezlewax 3 hours ago [-]
It's weird to see someone from the UK champion the Irish language as a choice, as if they hadn't tried to systematically wipe it from the face of the earth for quite a long period of time.
Choice is good of course so do keep up the good work.
pjmlp 6 hours ago [-]
That is why, if I feel like doing ML-style programming, I'd rather reach for Kotlin, Scala or F# than Rust; and even then, Java and C# have gotten enough inspiration that I can also feel at home while using them.
I am no stranger to ML type systems; my first one was Caml Light, when OCaml was still known as Objective Caml and Miranda was still something being discussed in programming-language lectures at my university.
From what I see, I also kind of find the same: too many people rush to Rust thinking that ML-style type systems are something new introduced by Rust, without having the background of where it all comes from.
shpongled 10 hours ago [-]
As someone who loves SML/OCaml and has written primarily Rust over the past ~10 years, I totally agree - I use it as a modern and ergonomic ML with best-in-class tooling, libraries, and performance. Lifetimes are cool, and I use them when needed, but they aren't the reason I use Rust at all. I would use Rust with a GC instead of lifetimes too.
hawk_ 6 hours ago [-]
How do you use Rust without lifetimes?
legobmw99 43 minutes ago [-]
Either a lot of clones or a lot of reference counted pointers. Especially if your point of comparison is a GC language, this is much less of a crime than some people think
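A minimal sketch of what "reference counted pointers instead of lifetimes" looks like in practice (the `Config` type and field names are my own illustration): shared ownership via `Rc` means no lifetime annotations appear anywhere, at the cost of a refcount bump per clone.

```rust
use std::rc::Rc;

// Sharing data without borrow-checker gymnastics: reference counting
// plays the role a GC would in OCaml, Kotlin, etc.
struct Config {
    name: String,
}

fn main() {
    let cfg = Rc::new(Config { name: "prod".to_string() });

    // Each "owner" just clones the Rc (a pointer copy plus a refcount
    // increment), so no lifetime annotations are needed anywhere.
    let for_logger = Rc::clone(&cfg);
    let for_server = Rc::clone(&cfg);

    assert_eq!(Rc::strong_count(&cfg), 3);
    println!("{} / {}", for_logger.name, for_server.name);
}
```

For data shared across threads the same pattern uses `Arc`; either way the code reads much closer to a GC'd language than to lifetime-annotated Rust.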
unstruktured 17 hours ago [-]
There is absolutely no reason to use double semicolons in practice. The only place you really should see them is when using the REPL.
sigzero 16 hours ago [-]
Yeah, it makes me think he doesn't understand them in OCaml.
acjohnson55 10 hours ago [-]
I worked in OCaml for a year and I couldn't tell you from memory what the difference was. I remember being very annoyed by OCaml's many language quirks.
yodsanklai 15 hours ago [-]
> aesthetically the double semicolons are an abomination and irk me far more.
I think they have been optional for like 20 years, except in the top-level interactive environment to force execution.
That being said, I still don't get why people are so upset with the syntax. You'll internalize it after a week of writing OCaml code.
swiftcoder 6 hours ago [-]
Erlang faces a similar uphill battle when it comes to syntax - there are three different punctuation marks used as terminators depending on context, and you have to keep in your head the rules for all 3. As someone who has written quite a bit of Erlang, but infrequently, it's always a battle.
And I think that's a big part of the reason Elixir has done so well (Elixir pretty much started out as Erlang-but-with-Ruby-syntax).
whimsicalism 14 hours ago [-]
I spent more than a week writing OCaml and still found the syntax pretty annoying. ReasonML would have been nice if the OCaml community actually cared, but they are a bit insular.
yawaramin 9 hours ago [-]
Reason syntax is fully supported by the OCaml ecosystem and has been for many years.
munificent 15 hours ago [-]
> I feel if OCaml had got its act together around about 2010 with multicore and a few other annoyances[1] it could have been Rust.
Arguably, that could have been Scala and for a while it seemed like it would be Scala but then it kind of just... didn't.
I suspect some of that was that the programming style of some high profile Scala packages really alienated people by pushing the type system and operator overloading much farther than necessary.
whimsicalism 14 hours ago [-]
Scala was always going to be hamstrung by the fact that it's a JVM language and yes, the crazy stuff people did with the language didn't help.
owlstuffing 9 hours ago [-]
I agree with that, but I think Scala has deeper problems.
It tries to be a better Java and a better OCaml at the same time. This split personality led to Scala’s many dialects, which made it notorious for being difficult to read and reason about, particularly as a mainstream language contender.
Above all, Scala is considered a functional language with imperative OOP qualities. And it more or less fits that description. But like it or not primarily functional languages don’t have a strong reputation for building large maintainable enterprise software.
That’s the quiet part no one says out loud.
It’s like how in academic circles Lisp is considered the most pure and most powerful of programming languages, which may be true. At the same time most real-world decision makers see it as unsuitable as a mainstream language. If it were otherwise, we’d have seen a Lisp contend with imperative langs such as Java, C#, TypeScript, etc.
I’ve always attributed this disconnect to the fact that people naturally model the world around them as objects with state — people don’t think functionally.
vips7L 8 hours ago [-]
To me Scala is first and foremost a research language. It’s even how it’s developed.
ubercore 4 hours ago [-]
I've only forayed into Rust a bit, but I agree. I would happily take a language like Rust that sacrifices some speed for simpler semantics around ownership/lifetimes.
Just Arc, clone and panic your way to success! :)
omcnoe 6 hours ago [-]
The issue regarding academia is that functional programming is treated as an afterthought/sideshow that is mainly of interest for research. Almost no-one is teaching FP concepts to undergrads.
uncircle 6 hours ago [-]
Before 2010-something, the very popular meme in software engineering was that functional programming is really hard to understand and not suited for anything outside of academia. [1] Now that I've been programming in an immutable functional language for almost a decade (Elixir), I'm certain the meme was mostly born out of unfamiliarity [2] rather than actual complexity; it's really not that much different from imperative, it just requires a different approach and an understanding of the trade-offs. Writing a distributed system (say, a web app backend) in an imperative, mutable language in this day and age is an increasingly laughable proposition in my view. Use the right tool for the problem at hand.
Many academically-trained developers never got exposed to FP in school, and to this day you can still hear, albeit in much lesser numbers thanks to the popularity of Elixir/Clojure/etc., the meme of FP being "hard" perpetuated.
---
1: I would go so far as to blame Haskell for the misplaced belief that FP means overcomplicated type theory when all you want is a for loop and a mutable data-structure.
2: I played with OCaml 10+ years ago, and couldn't make heads or tails of it. I tried again recently, and it just felt familiar and quite obvious.
mrkeen 6 hours ago [-]
> the very popular meme in software engineering was that functional programming is really hard
> I'm certain the meme was mostly born out of unfamiliarity
> I would go so far as to blame Haskell for the misplaced belief that FP means overcomplicated type theory
pjmlp 4 hours ago [-]
I can assure you that wasn't the case in my degree; if anything, almost every lecture had its own programming language.
Maybe I got lucky being in one of the most relevant universities in Portugal, however I can tell that others in the country strive for similar quality for their graduates, even almost 40 years later.
rtpg 9 hours ago [-]
I think people like the ADTs and pattern matching that Rust gives you, but really what makes Rust even more pleasant is that you have _so many_ trait methods on standard library objects that offer succinct answers to common patterns.
Haskell of course has some of this, but immutability means that Haskell doesn't have to have answers for lots of things. And you want pattern matching as your basic building block, but at the end of the day most of your code won't have pattern matching and will instead rely on higher-level patterns (which can build off of ADTs providing some degree of certainty on totality etc.).
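A small sketch of what "trait methods offering succinct answers" means in everyday Rust (the input data is my own illustration): stdlib combinators like `filter_map`, `map` and `unwrap_or` replace the explicit `match Some/None` boilerplate, with pattern matching remaining the primitive underneath.

```rust
// Instead of matching on Option by hand, everyday code leans on the
// stdlib's combinators; pattern matching is the primitive underneath.
fn main() {
    let raw = ["3", "x", "5"];

    // filter_map keeps only the values that parse; no explicit match needed.
    let nums: Vec<i32> = raw.iter().filter_map(|s| s.parse().ok()).collect();
    assert_eq!(nums, vec![3, 5]);

    // map + unwrap_or replace the common "match Some/None" boilerplate.
    let first_doubled = nums.first().map(|n| n * 2).unwrap_or(0);
    assert_eq!(first_doubled, 6);

    println!("{nums:?}, first doubled = {first_doubled}");
}
```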
the__alchemist 16 hours ago [-]
I'm with you. I think some of the nicest parts of Rust have nothing to do with memory safety; they're ways to structure your program, as you mention.
garbthetill 17 hours ago [-]
Doesn't Rust still have the advantage of having no GC? I don't like writing Rust, but the selling point of being able to write performant code with memory-safety guarantees has always stuck with me.
noelwelsh 17 hours ago [-]
I think "no gc but memory safe" is what originally got people excited about Rust. It's a genuinely new capability in production ready languages. However, I think Rust is used in many contexts where a GC is just fine and working with lifetimes makes many programs more painful to write. I think for many programs the approach taken by Oxidized OCaml[1] or Scala[2] gives 80% of the benefit while being a lot more ergonomic to work with.
When I see Rust topping the "most loved language" on Stack Overflow etc. what I think is really happening is that people are using a "modern" language for the first time. I consistently see people gushing about, e.g., pattern matching in Rust. I agree pattern matching is awesome, but it is also not at all novel if you are a PL nerd. It's just that most commercial languages are crap from a PL nerd point of view.
So I think "no gc but memory safe" is what got people to look at Rust, but it's 1990s ML (ADTs, pattern matching, etc.) that keeps them there.
> So I think "no gc but memory safe" is what got people to look at Rust, but it's 1990s ML (ADTs, pattern matching, etc.) that keeps them there.
Yeah; this is my experience. I've been working in C professionally lately after writing Rust full-time for a few years. I don't really miss the borrow checker. But I really miss ADTs (e.g. Result<>, Option, etc.), generic containers (Vec<T>), tuples, match expressions and the tooling (Cargo).
You can work around a lot of these problems in C with grit and frustration, but Rust just gives you good answers out of the box.
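As a concrete sketch of the kind of out-of-the-box answer C lacks (the `parse_pair` helper is my own illustration): errors as values with `Result`, propagated by `?` instead of ad-hoc errno/int-return conventions, combined with tuples and pattern-based destructuring.

```rust
// Errors as values: Result + `?` instead of C's errno/int conventions.
fn parse_pair(s: &str) -> Result<(i32, i32), std::num::ParseIntError> {
    let mut it = s.splitn(2, ',');
    // `?` propagates the parse error to the caller automatically.
    let a: i32 = it.next().unwrap_or("").trim().parse()?;
    let b: i32 = it.next().unwrap_or("").trim().parse()?;
    Ok((a, b))
}

fn main() {
    // Success and failure are both ordinary values you can match on.
    assert_eq!(parse_pair("1, 2"), Ok((1, 2)));
    assert!(parse_pair("1, x").is_err());
    println!("{:?}", parse_pair("10,20"));
}
```

In C the equivalent needs a sentinel return value, an out-parameter, or errno, and nothing forces the caller to check it; here ignoring the `Result` is at least a compiler warning.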
GeekyBear 16 hours ago [-]
> I think "no gc but memory safe" is what originally got people excited about Rust.
I think it was more about "the performance of C, but with memory safety and data race safety".
voidhorse 10 hours ago [-]
Spot on. It's also fascinating to watch people have their minds blown in 2020+ by basic features that have been around since the nineties. It's kind of sad, actually. The industry would be in a much better place than it is today if so many programmers weren't allergic to all things academic and "theoretical", and were more curious and technical than conceited. It's baffling that computing, a subject area in which the theory quite literally is the practice, is so full of people who refuse to engage with theoretical work and research.
pjmlp 2 hours ago [-]
And eventually nothing of it will matter because our AI overlords will eventually translate natural language, maybe with some added help from formalisms, into any kind of application.
vips7L 17 hours ago [-]
You can write safe and performant code with a garbage collector. They're not mutually exclusive. It just depends on your latency and throughput requirements.
sieabahlpark 17 hours ago [-]
[dead]
0cf8612b2e1e 16 hours ago [-]
Considering how many applications are running in JS/Python, execution speed or GC is a low concern for many programs. Ergonomics (community, compiler guarantees, distribution, memory pressure, talent availability, whatever) seem more meaningful.
gf000 7 hours ago [-]
I get your point, but JS is an order of magnitude faster than Python, they are not in the same league.
A lot of effort went into making it efficient thanks to the web, while Python sort of has its hands tied behind its back due to exposing internals that can be readily used from C.
pjmlp 2 hours ago [-]
PyPy could get some community love, but I guess it will never happen, and on the GPU side, Python is basically a compiler DSL.
inkyoto 7 hours ago [-]
OCaml was[0] very popular in high-frequency trading, especially because of its high performance and predictable latency despite having a garbage collector, plus, of course, because of the correctness of the code written in it.
OCaml's GC design is a pretty simple one: two heaps, one for short-lived objects and another for long-lived objects, in a generational and mostly non-moving design. Another thing that helps tremendously is the fact that OCaml is a functional programming language[1], which means that, since values are rarely mutated, most GC objects are short- or very short-lived and never hit the heap reserved for long-lived objects; the short-lived objects perish often and do so quickly.
So, to recap, OCaml’s GC is tuned for simplicity, predictable short pauses, and easy interoperability, whereas Java’s GC is tuned for maximum throughput and large-scale heap management with sophisticated concurrency and compaction.
[0] Maybe it still is – I am not sure.
[1] It is actually a multiparadigm design, although most code written in OCaml is functional in its style.
gf000 7 hours ago [-]
At the same time, OCaml has a very simplistic memory layout where even integers are boxed - Java at least has primitive types.
That surely has a performance cost.
dmpk2k 7 hours ago [-]
Are you sure about boxed integers? Perhaps you mean floats? As far as I know Ocaml uses the typical integer/pointer divide.
orthoxerox 6 hours ago [-]
IIRC it has 31-bit integers, which means you can't natively work with 32-bit data without widening.
zorobo 2 hours ago [-]
Or 63 bits on 64-bit architectures.
fulafel 6 hours ago [-]
Anyone have a link to the Android talk?
I wonder if it was backend code or on-device code. On device you could probably justify the compromises of a no-GC language better.
I don't think it was on-device code, as they talked about porting Python projects. But you can watch the talk to see if I'm misremembering.
lmm 12 hours ago [-]
> I feel if OCaml had got its act together around about 2010 with multicore and a few other annoyances[1] it could have been Rust.
No, that wouldn't have made the difference. No one passed on OCaml because it lacked multicore or because they were annoyed by the semicolons.
People don't switch languages because the new language is "old language but better". They switch languages because a) new language does some unique thing that the old language didn't do, even if the unique thing is useless and irrelevant, b) new language has a bigger number than old language on benchmarks, even if the benchmark is irrelevant to your use case, or c) new language claims to have great interop with old language, even if this is a lie.
There is no way OCaml could have succeeded in the pop culture that is programming language popularity. Yes, all of the actual reasons to use Rust apply just as much to OCaml and if our industry operated on technical merit we would have migrated to OCaml decades ago. But it doesn't so we didn't.
brabel 2 hours ago [-]
I appreciate both OCaml and Rust, but your view seems to be entirely wrong to me, sorry to be blunt.
People wouldn't care much for Rust at all if it didn't offer two things that are absolute killer features (and that OCaml does not have):
* no GC, while being memory safe.
* high performance on par with C while offering zero-cost high-level conveniences.
There's no other language to this day that offers anything like that. Rust really is unique in this area, as far as I know.
The fact that it also has a very good package manager and was initially backed by a big, trusted company, Mozilla, while OCaml comes from a research lab, also makes this particular race a no-brainer unless you're into functional programming (which has never been very popular, no matter the language).
sanderjd 8 hours ago [-]
Yeah I was initially drawn to rust because I loved ocaml but wished it were more practical.
troupo 17 hours ago [-]
OCaml also needed the brief but bright ReasonML moment to add/fix/improve some of the syntax IIRC and work on user-friendly error messages. But this should've definitely happened much much earlier than it did.
Taikonerd 16 hours ago [-]
I would say ReasonML also needed more follow-through. It seems like the OCaml community hasn't really rallied behind it.
mirekrusin 15 hours ago [-]
Maybe it's good it died? Now we have moonbit lang.
theLiminator 13 hours ago [-]
flix also looks pretty nice
troupo 7 hours ago [-]
Moonbit the fast, compact and user friendly language for WebAssembly? Or Moonbit the language for industrial usage? Or Moonbit the AI-native general-purpose programming language?
(They are the same language)
Reason at least was an active collaboration between several projects in the OCaml space with some feedback into OCaml proper (even though there was a lot of initial resistance IIRC).
bsder 15 hours ago [-]
It doesn't help that the OCaml community also has the problem that a significant minority seem to resent the fact that one company (Jane Street) has written more OCaml than the rest of the world combined, and then some, and so de facto controls the ecosystem.
Whereas the Go and Rust communities, for example, were just fine with having corporate sponsorship driving things.
debugnik 14 hours ago [-]
> and so de facto controls the ecosystem
They really don't: less than 5% of opam packages depend on Base, and that's their most contentious dependency, I'd say. Barely anyone's complaining about their work on the OCaml platform or their less opinionated libraries. I admit the feeling that they do lingers, but having used OCaml in anger for a few years I think it's a non-issue.
What they do seem to control is the learning pipeline, as a newcomer you find yourself somewhat funneled to Base, Core, etc. I tried them for a while, but eventually understood I don't really need them.
Rapzid 12 hours ago [-]
Golang is interesting.. Hasn't steering loosened up a bit in recent years?
But going way back while yeah the team at Google controlled the direction, there were some pretty significant contributions from outside to channels, garbage collection, and goroutines and scheduling..
77pt77 12 hours ago [-]
> I feel if OCaml had got its act together around about 2010 with multicore and a few other annoyances[1] it could have been Rust
That's about the time-frame where I got into OCaml so I followed this up close.
The biggest hindrance in my opinion is/was boxed types.
Too much overhead for low-level stuff, although there was a guy from Oxbridge doing GL stuff with it.
13 hours ago [-]
benreesman 17 hours ago [-]
[flagged]
simonask 17 hours ago [-]
This is one of the wilder conspiracy theories to me, and that says a lot in 2025.
What is that link? A thread from 2017? What am I supposed to get from it?
the__alchemist 16 hours ago [-]
It's real, and as someone who loves rust, it's embarrassing, and difficult to avoid. The OSS embedded rust users in particular are nuts.
benreesman 16 hours ago [-]
Really appreciate a fairminded Rust person chiming in. It's a disservice to the language (which is cool!), and it makes the community look bad.
Rust might gain some adoption among people who are new to high performance software and see a narrative that it's the only game in town, but it turns off a lot of older folks like myself who know it isn't and that community matters.
gl to you and people like you trying to get it back on track!
simonask 8 hours ago [-]
Can we be clear here: You are accusing people in this thread of "inorganically" bringing up Rust. The implication being that there is some shadowy group or organization coordinating the bombing of random forum threads to talk about Rust, or what are we saying here?
Rust is an exciting language. It comes up because many people like it.
birdfood 12 hours ago [-]
OCaml is probably my favourite language.
The most involved project I did with it was a CRUD app for organising Writer's Festivals.
The app was 100% OCaml (ReasonML so I could get JSX) + Dream + HTMX + DataTables. I used modules to get reusable front end templates. I loved being able to make a change to one of my data models and have the compiler tell me almost instantly where the change broke the front end. The main value of the app was getting data out of excel into a structured database, but I was also able to provide templated and branded itineraries in .odt format, create in memory zipped downloads so that I didn't need to touch the server disk. I was really impressed by how much I could achieve with the ecosystem.
But having to write all my database queries in strings and then marshal the data through types was tiring (and effectively not compile time type checked) and I had to roll my own auth. I often felt like I was having to work on things that were not core to the product I was trying to build.
I've spent a few years bouncing around different languages and I think my take away is that there is no perfect language. They all suck in their own special way.
Now I'm building an app just for me and I'm using Rails. Pretty much everything I've wanted to reach for has a good default answer. I really feel like I'm focused on what is relevant to the product I'm building and I'm thinking about things unrelated to language like design layout and actually shipping the thing.
_mu 19 hours ago [-]
I haven't worked in OCaml but I have worked a bit in F# and found it to be a pleasant experience.
One thing I am wondering about in the age of LLMs is if we should all take a harder look at functional languages again. My thought is that if FP languages like OCaml / Haskell / etc. let us compress a lot of information into a small amount of text, then that's better for the context window.
Possibly we might be able to put much denser programs into the model and one-shot larger changes than is achievable in languages like Java / C# / Ruby / etc?
jappgar 18 hours ago [-]
That was my optimistic take before I started working on a large Haskell code base.
Aside from the obvious problem that there's not enough FP in the training corpus, it seems like terser languages don't work all that well with LLMs.
My guess is that verbosity actually helps the generation self-correct... if it predicts some "bad" tokens it can pivot more easily and still produce working code.
sshine 17 hours ago [-]
> terser languages don't work all that well with LLMs
I’d believe that, but I haven’t tried enough yet. It seems to be doing quite well with jq. I wonder how it fares with APL.
When Claude generates Haskell code, I constantly want to reduce it. Doing that is a very mechanical process; I wonder if giving an agent a linter would give better results than overloading it all to the LLM.
gylterud 6 hours ago [-]
I usually treat the LLM generated Haskell code as a first draft.
The power of Haskell in this case is the fearless refactoring the strong type system enables. So even if the code generated is not beautiful, it can sit there and do a job until the surrounding parts have taken shape, and then be refactored into something nice when I have a moment to spare.
willhslade 14 hours ago [-]
APL is executed right to left, and LLMs... aren't.
Vosporos 4 hours ago [-]
Can't you just run HLint on it?
yawaramin 17 hours ago [-]
There's actually a significant difference between Haskell and OCaml here so we can't lump them together. OCaml is a significantly simpler, and moderately more verbose, language than Haskell. That helps LLMs when they do codegen.
b_e_n_t_o_n 17 hours ago [-]
This has been my experience as well. AI writes Go better than any language besides maybe HTML and JavaScript/Python.
byw 12 hours ago [-]
I wonder if it has more to do with larger training data than the languages themselves.
gylterud 5 hours ago [-]
I have found that Haskell has two good things going for it when it comes to LLM code generation. Both have to do with correctness.
The expressive type system catches a lot of mistakes, and the fact that they are compile errors which can be fed right into the LLM again means that incorrect code is caught early.
The second is property based testing. With it I have had the LLM generate amazingly efficient, correct code, by iteratively making it more and more efficient – running quickcheck on each pass. The LLM is not super good at writing the tests, but if you add some yourself, you quickly root out any mistakes in the generated code.
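For readers who haven't seen property-based testing: here is a minimal hand-rolled sketch of the idea in OCaml (the real libraries, QuickCheck in Haskell or QCheck in OCaml, add proper generators and counterexample shrinking on top of this):

```ocaml
(* Hand-rolled property test: generate many random inputs and check that
   a stated property holds for all of them, instead of hand-picking
   example cases. Property here: reversing a list twice is the identity. *)
let random_list () =
  List.init (Random.int 20) (fun _ -> Random.int 1000)

let check_property ~count prop =
  let rec go i =
    if i = count then true
    else
      let input = random_list () in
      if prop input then go (i + 1)
      else (Printf.printf "counterexample after %d runs\n" i; false)
  in
  go 0

let () =
  Random.self_init ();
  assert (check_property ~count:1000
            (fun l -> List.rev (List.rev l) = l))
```

The generator and runner here are deliberately naive; the point of the technique is that the property, not a handful of fixed examples, is what gets checked.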
akoboldfrying 2 hours ago [-]
Property-based testing is available in other languages. E.g., JS has fast-check, inspired by quickcheck.
gylterud 2 hours ago [-]
The way code is written in Haskell, small laser focused functions and clearly defined and mockable side effects, lends itself very well to property based testing.
This might not be impossible to achieve in other languages, but I haven’t seen it used as prevalently in other languages.
gf000 18 hours ago [-]
My completely non-objective experiment of writing a simple CLI game in C++ and Haskell shows that the Haskell version did have fewer lines of code... but the word counts were roughly the same, meaning the Haskell code was just "wider" instead of "higher".
And then I didn't even make this "experiment" with Java or another managed, more imperative language which could have shed some weight due to not caring about manual memory management.
So not sure how much truth is in there - I think it differs based on the given program: some lend itself better for an imperative style, others prefer a more functional one.
QuadmasterXLII 23 minutes ago [-]
My experience is that width is faster to type than height, mostly from the lack of time spent indenting. This is _completely_ fixed by using a decent auto-formatter, but at least for me the bias towards width lingers on, because it took me years to notice that I needed an auto-formatter.
Buttons840 13 hours ago [-]
If LLMs get a little better at writing code, we might want to use really powerful type systems and effect systems to limit what they can do and ensure it is correct.
For instance, dependent types allow us to say something like "this function will return a sorted list", or even "this function will return a valid Sudoku solution", and these things will be checked at compile time--again, at compile time.
Combine this with an effect system and we can suddenly say things like "this function will return a valid Sudoku solution, and it will not access the network or filesystem", and then you let the LLM run wild. You don't even have to review the LLM output, if it produces code that compiles, you know it works, and you know it doesn't access the network or filesystem.
Of course, if LLMs get a lot better, they can probably just do all this in Python just as well, but if they only get a little better, then we might want to build better deterministic systems around the unreliable LLMs to make them reliable.
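OCaml can't state "returns a sorted list" in the type itself the way a dependently-typed language can, but as a rough, stdlib-only approximation of the same idea, the smart-constructor pattern gives an abstract type whose only constructor establishes the invariant:

```ocaml
(* A weaker, module-boundary version of the dependent-types idea: a
   Sorted.t can only be obtained through of_list, which actually sorts,
   so any code that receives a Sorted.t can rely on the invariant. *)
module Sorted : sig
  type t                        (* abstract: cannot be forged outside *)
  val of_list : int list -> t   (* the only way in; it sorts *)
  val to_list : t -> int list
end = struct
  type t = int list
  let of_list l = List.sort compare l
  let to_list l = l
end

let () =
  let s = Sorted.of_list [3; 1; 2] in
  assert (Sorted.to_list s = [1; 2; 3])
```

The guarantee lives in the trusted constructor rather than in a compile-time proof, which is exactly the gap dependent types would close.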
gylterud 2 hours ago [-]
The day when LLMs generate useful code with dependent types! That would be awesome!
omcnoe 5 hours ago [-]
I think that functional languages do actually have some advantages when it comes to LLM's, but not due to terseness.
Rather, immutability/purity is a huge advantage because it plays better with the small context window of LLM's. An LLM then doesn't have to worry about side effects or mutable references to data outside the scope currently being considered.
dkarl 16 hours ago [-]
In Scala, I've had excellent luck using LLMs to speed up development when I'm using cats-effect, an effects library.
My experience in the past with something like cats-effect has been that there are straightforward things that aren't obvious, and if you haven't been using it recently, and maybe even if you've been using it but haven't solved a similar problem recently, you can get stuck trawling through the docs squinting at type signatures looking for what turns out to be, in hindsight, an elegant and simple solution. LLMs have vastly reduced this kind of friction. I just ask, "In cats-effect, how do I...?" and 80% of the time the answer gets me immediately unstuck. The other 20% of the time I provide clarifying context or ask a different LLM.
I haven't done enough maintenance coding yet to know if this will radically shift my view of the cost/benefit of functional programming with effects, but I'm very excited. Writing cats-effect code has always been satisfying and frustrating in equal measure, and so far, I'm getting the confidence and correctness with a fraction of the frustration.
I haven't unleashed Claude Code on any cats-effect code yet. I'm curious to see how well it will do.
sshine 18 hours ago [-]
> My thought is that if FP languages like OCaml / Haskell / etc. let us compress a lot of information into a small amount of text, then that's better for the context window.
Claude Code’s Haskell style is very verbose: if-then-elsey, lots of nested case-ofs, do-blocks at multiple levels of indentation, very little naming of things at the top level.
Given a sample of a simple API client, and a request to do the same but for another API, it did very well.
I concluded that I just have more opinions about Haskell than Java or Rust. If it doesn’t look nice, why even bother with Haskell.
I reckon that you could seed it with style examples that take up very little context space. Also, remind it to not enable language pragmas per file when they’re already in .cabal, and similar.
esafak 18 hours ago [-]
I think LLMs benefit from training examples, static typing, and an LSP implementation more than terseness.
nextos 18 hours ago [-]
Exactly. My experience building a system that generates Dafny and Liquid Haskell is that you can get much further than with a language that is limited to dynamic or simple static types.
nukifw 18 hours ago [-]
To be completely honest, I currently only use LLMs to assist me in writing documentation (and translating articles), but I know that other people are looking into it: https://anil.recoil.org/wiki?t=%23projects
d4mi3n 18 hours ago [-]
I think this is putting the cart before the horse. Programs are generally harder to read than they are to write, so optimizing for concise output to benefit the tool at the potential expense of the human isn't a trade I'd personally make.
Granted, this may just be an argument for being more comfortable reading/writing code in a particular style, but even without the advantages of LLMs adoption of functional paradigms and tools has been a struggle.
seprov 16 hours ago [-]
Procedures can be much more concise in functional/ML syntax, but many things are not -- dependency injection in languages like C# for example are able to be much less verbose because of really excellent DI libraries and (arguably more sane) instance constructor syntax.
pmahoney 14 hours ago [-]
I tried to like OCaml for a few years. The things that hold me back the most are niggling things that are largely solved in more "modern" langs, the biggest being the inability to "print" arbitrary objects.
There are ppx things that can automatically derive "to string" functions, but it's a bit of effort to set up, it's not as nice to use as what's available in Rust, and it can't handle things like Set and Map types without extra work, e.g. [1] (from 2021 so situation may have changed).
Compare to golang, where you can just use "%v" and related format strings to print nearly anything with zero effort.
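To make that tradeoff concrete, here is roughly the hand-written boilerplate that a deriving ppx would otherwise generate (plain stdlib, no setup cost, but you need one such function per type, which is exactly the friction described above):

```ocaml
(* Without [@@deriving show], printing a variant means writing a
   to-string function by hand for every type you want to inspect. *)
type shape =
  | Circle of float
  | Rect of float * float

let show_shape = function
  | Circle r -> Printf.sprintf "Circle %g" r
  | Rect (w, h) -> Printf.sprintf "Rect (%g, %g)" w h

let () = print_endline (show_shape (Rect (2., 3.)))
```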
Go's %v leaves a lot to be desired, even when using %+#v to print even more info. I wish there was a format string to deeply traverse into pointers. Currently I have to import go-spew for that, which is a huge annoyance.
Python does it best from what I've seen so far, with its __repr__ method.
tucnak 4 hours ago [-]
Go has both Stringer and GoStringer interfaces, which is basically the same thing as __repr__.
JaggerJo 4 hours ago [-]
DarkLang, which was initially written in OCaml, eventually switched to F#. From what I remember the main reasons were the library ecosystem and concurrency.
I know .NET inside and out, so I might be biased. Most of the boring parts have multiple good solutions that I can pick from. I don't have to spend time on things that are not essential to the problem I actually want to solve.
I've used F# professionally for multiple years and maintain a quite popular UI library written in it. But even with .NET there still are gaps because of the smaller F# language ecosystem. Not everything "just works" between CLR languages - sometimes it's a bit more complicated.
The main point I'm trying to make is that going off the beaten path (C#) for example also comes with a cost. That cost might or might not be offset by the more expressive language. It's important to know this so you are not surprised by it.
With OCaml it's similar I'd say. You get a really powerful language, but you're off the beaten path. Sure, there are a few companies using it in production - but their use case might be different than yours. On Jane Street's Signals and Threads podcast they often talk about their really specific use cases.
What a brilliant article, it really puts to rest for me, the whole “why not use F#?” argument. In almost every OCaml thread, someone suggests F# as a way to sidestep OCaml’s tooling.
I’ve always been curious about OCaml, especially since some people call it “Go with types” and I’m not a fan of writing Rust. But I’m still not sold on OCaml as a whole; its evangelists just don’t win me over the way the Erlang, Ruby, Rust, or Zig folks do. I just can’t see the vision.
debugnik 17 hours ago [-]
Funny, I moved to OCaml to sidestep F# tooling. At least last time I used F#: Slow compiler, increasingly C#-only ecosystem, weak and undocumented MSBuild (writing custom tasks would otherwise be nice!), Ionide crashes, Fantomas is unsound...
But OCaml sadly can't replace F# for all my use cases. F# does get access to many performance-oriented features that the CLR supports and OCaml simply can't, such as value-types. Maybe OxCaml can fix that long term, but I'm currently missing a performant ML-like with a simple toolchain.
JaggerJo 4 hours ago [-]
Did you try F# in JetBrains Rider? It's the best F# tooling you can buy IMO.
debugnik 2 hours ago [-]
Yes I actually ended up using Rider, although I don't like switching editors. But that only replaces Ionide, and I was having some growing pains with the entire toolchain.
JaggerJo 1 hours ago [-]
Yeah - I worked on an F# project that was ~300K LoC and tooling speed really becomes an issue at that point.
joshmarlow 16 hours ago [-]
It's been a few years since I've touched OCaml - the ecosystem just wasn't what I wanted - but the core language is still my favorite.
And the best way I can describe why is that my code generally ends up with a few heavy functions that do too much; I can fix it once I notice it, but that's the direction my code tends to go in.
In my OCaml code, I would look for the big function and... just not find it. No single workhorse that does a lot - for some reason it was just easier for me to write good code.
Now I do Rust for side projects because I like the type system - but I would prefer OCaml.
I keep meaning to checkout F# though for all of these reasons.
jasperry 16 hours ago [-]
Question about terminology: Is it common to call higher-order function types "exponential types" as the article does? I know what higher-order functions are, but am having trouble grasping why the types would be called "exponential".
xigoi 12 hours ago [-]
ackfoobar has already given a good reason why function types are called exponential, but there is an even deeper reason: function types interact algebraically the same way as exponents.
The type A → (B → C) is isomorphic to (A × B) → C (via currying). This is analogous to the rule (cᵇ)ᵃ = cᵇ·ᵃ.
The type (A + B) → C is isomorphic to (A → C) × (B → C) (a function with a case expression can be replaced with a pair of functions). This is analogous to the rule cᵃ⁺ᵇ = cᵃ·cᵇ.
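The currying rule can be witnessed directly in OCaml by a pair of mutually inverse functions:

```ocaml
(* curry and uncurry convert between (a * b) -> c and a -> b -> c,
   witnessing the isomorphism behind the (c^b)^a = c^(b*a) analogy. *)
let curry f a b = f (a, b)
let uncurry f (a, b) = f a b

let () =
  let add (x, y) = x + y in
  assert (curry add 2 3 = 5);
  assert (uncurry (curry add) (2, 3) = 5)
```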
ackfoobar 16 hours ago [-]
A first-order function type is already exponential.
A sum type has as many possible values as the sum of its cases. E.g. `A of bool | B of bool` has 2+2=4 values. Similarly for product types and exponential types. E.g. the type bool -> bool has 2^2=4 values (id, not, const true, const false) if you don't think about side effects.
jolmg 16 hours ago [-]
> bool -> bool has 2^2=4 values
Not the best example since 2*2=4 also.
How about this bit of Haskell:
f :: Bool -> Maybe Bool
That's 3 ^ 2 = 9, right?
f False = Nothing
f False = Just True
f False = Just False
f True = Nothing
f True = Just True
f True = Just False
Those are 6. What would be the other 3? or should it actually be a*b=6?
EDIT: Nevermind, I counted wrong. Here are the 9:
f x = case x of
True -> Nothing
False -> Nothing
f x = case x of
True -> Nothing
False -> Just False
f x = case x of
True -> Nothing
False -> Just True
f x = case x of
True -> Just False
False -> Nothing
f x = case x of
True -> Just False
False -> Just False
f x = case x of
True -> Just False
False -> Just True
f x = case x of
True -> Just True
False -> Nothing
f x = case x of
True -> Just True
False -> Just False
f x = case x of
True -> Just True
False -> Just True
ackfoobar 16 hours ago [-]
Good point, well there's Ordering type built-in in Haskell (LT | EQ | GT). Ordering -> bool has 2^3=8 values (const true, const false, == LT, == EQ, == GT, is_lte, is_gte, ne)
EDIT: now you see why I used the smallest type possible to make my point. Exponentials get big FAST (duh).
jasperry 16 hours ago [-]
You didn't list all functions, just input-output pairs. Each function is a map from every possible input to an output:
f1 False = Nothing, f1 True = Nothing
f2 False = Nothing, f2 True = Just True
...
This gives the correct 3^2 = 9 functions.
nukifw 16 hours ago [-]
Usually we speak only about sums and products (because the article refers to ADTs, i.e. Algebraic Data Types). A function is not really data, so it is not included. But you can use the same trick (i.e. a -> b has cardinality b^a) to compute the number of potential inhabitants.
voidhorse 2 hours ago [-]
The answers in the replies are all good, but the real reason is that in category theory the construct that models function types is called an "exponential object". The choice of that name stems from the reasons explored in the replies, in particular from the fact that the number of total functions from A to B is always determined by an exponent (the cardinality of B raised to the power of the cardinality of A).
vram22 14 hours ago [-]
I had the same doubt.
Here is my uneducated guess:
In math, after sum and product, comes exponent :)
So they may have used that third term in an analogous manner in the example.
ackfoobar 19 hours ago [-]
> Sum types: For example, Kotlin and Java (and de facto C#) use a construct associated with inheritance relations called sealing.
This has the benefit of giving you the ability to refer to a case as its own type.
> the expression of sums verbose and, in my view, harder to reason about.
You declare the sum type once, and use it many times. Slightly more verbose sum type declaration is worth it when it makes using the cases cleaner.
nukifw 18 hours ago [-]
In the specific case of OCaml, this is also possible using indexing and GADTs or polymorphic variants. But generally, referencing as its own type serves different purposes. From my point of view, distinguishing between sum branches often tends to result in code that is difficult to reason about and difficult to generalise due to concerns about variance and loss of type equality.
ackfoobar 18 hours ago [-]
Unless you reach an unsound part of the type system I don't see how. Could you provide an example?
```ocaml
type _ treated_as =
| Int : int -> int treated_as
| Float : float -> float treated_as
let f (Int x) = x + 1 (* val f : int treated_as -> int *)
```
```ocaml
let f = function
| `Foo x -> string_of_int (x + 1)
| `Bar x -> x ^ "Hello"
(* val f : [< `Foo of int | `Bar of string ] -> string *)
let g = function
| `Foo _ -> ()
| _ -> ()
(* val g : [> `Foo of 'a ] -> unit *)
```
(Notice the difference between `>` and `<` in the signature?)
And since OCaml also has an object model, you can also encode sums and sealing using modules (and private type abbreviations).
ackfoobar 16 hours ago [-]
Oh if you use those features to express what "sum type as subtyping" can, it sure gets confusing. But it's not those things that I want to express that are hard to reason about, the confusing part is the additions to the HM type system.
A meta point: it seems to me that a lot of commenters in my thread don't know that vanilla HM cannot express subtypes. This allows the type system to "run backwards" and you have full type inference without any type annotations. One can call it a good tradeoff but it IS a tradeoff.
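A small OCaml illustration of that tradeoff: with no annotations at all, HM infers the principal (most general) type, which is the property that subtyping-based systems generally give up:

```ocaml
(* No type annotations anywhere; OCaml infers the most general type:
     val compose : ('a -> 'b) -> ('c -> 'a) -> 'c -> 'b
   Principal-type inference like this is what adding subtyping would cost. *)
let compose f g x = f (g x)

let () =
  let inc_then_string = compose string_of_int (fun n -> n + 1) in
  assert (inc_then_string 41 = "42")
```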
nukifw 15 hours ago [-]
Yes and my point was, when you want what you present in the first comment, quoting my post, you have tools for that, available in OCaml. But there is cases, when you do not want to treat each branch of your constructors "as a type", when the encoding of visitors is just rough. This is why I think it is nice to have sum type, to complete product type. So i am not sure why we are arguing :)
ackfoobar 15 hours ago [-]
> So i am not sure why we are arguing :)
I think we agree on a lot of points. The rest is mostly preferences. Some other comments in my thread though...
nukifw 15 hours ago [-]
Ok! (BTW, thanks for the interaction!)
sunnydiskincali 17 hours ago [-]
> This has the benefit of giving you the ability to refer to a case as its own type.
A case of a sum-type is an expression (of the variety so-called a type constructor), of course it has a type.
datatype shape =
Circle of real
| Rectangle of real * real
| Point
Circle : real -> shape
Rectangle : real * real -> shape
Point : shape
A case itself isn't a type, though it has a type. Thanks to pattern matching, you're already unwrapping the parameter to the type-constructor when handling the case of a sum-type. It's all about declaration locality. (real * real) doesn't depend on the existence of shape.
The moment you start ripping cases as distinct types out of the sum-type, you create the ability to side-step exhaustiveness and sum-types become useless in making invalid program states unrepresentable. They're also no longer sum-types. If you have a sum-type of nominally distinct types, the sum-type is contingent on the existence of those types. In a class hierarchy, this relationship is bizarrely reversed and there are knock-on effects to that.
> You declare the sum type once, and use it many times.
And you typically write many sum-types. They're disposable. And more to the point, you also have to read the code you write. The cost of verbosity here is underestimated.
> Slightly more verbose sum type declaration is worth it when it makes using the cases cleaner.
C#/Java don't actually have sum-types. It's an incompatible formalism with their type systems.
Anyways, let's look at these examples:
C#:
public abstract record Shape;
public sealed record Circle(double Radius) : Shape;
public sealed record Rectangle(double Width, double Height) : Shape;
public sealed record Point() : Shape;
double Area(Shape shape) => shape switch
{
Circle c => Math.PI * c.Radius * c.Radius,
Rectangle r => r.Width * r.Height,
Point => 0.0,
_ => throw new ArgumentException("Unknown shape", nameof(shape))
};
ML:
datatype shape =
Circle of real
| Rectangle of real * real
| Point
val result =
case shape of
Circle r => Math.pi * r * r
| Rectangle (w, h) => w * h
| Point => 0.0
They're pretty much the same outside of C#'s OOP quirkiness getting in its own way.
ackfoobar 16 hours ago [-]
> The moment you start ripping cases as distinct types out of the sum-type, you create the ability to side-step exhaustiveness and sum-types become useless in making invalid program states unrepresentable.
Quite the opposite, that gives me the ability to explicitly express what kinds of values I might return. With your shape example, you cannot express in the type system "this function won't return a point". But with sum type as sealed inheritance hierarchy I can.
> C#/Java don't actually have sum-types.
> They're pretty much the same
Not sure about C#, but in Java if you write `sealed` correctly you won't need the catch-all throw.
If they're not actual sum types but are pretty much the same, what good does the "actually" do?
tomsmeding 15 hours ago [-]
> Not sure about C#, but in Java if you write `sealed` correctly you won't need the catch-all throw.
Will the compiler check that you have handled all the cases still? (Genuinely unsure — not a Java programmer)
> with pattern matching for switch (JEP 406) the compiler can confirm that every permitted subclass of Shape is covered, so no default clause or other total pattern is needed. The compiler will, moreover, issue an error message if any of the three cases is missing
brabel 1 hours ago [-]
Yes, that's the whole purpose of marking an interface/class `sealed`.
sunnydiskincali 14 hours ago [-]
> With your shape example, you cannot express in the type system "this function won't return a point".
Sure you can, that's just subtyping. If it returns a value that's not a point, the domain has changed from the shape type and you should probably indicate that.
structure Shape = struct
datatype shape =
Circle of real
| Rectangle of real * real
| Point
end
structure Bound = struct
datatype shape =
Circle of real
| Rectangle of real * real
end
This is doing things quick and dirty. For this trivial example it's fine, and I think a good example of why making sum-types low friction is a good idea. It completely changes how you solve problems when they're fire and forget like this.
That's not to say it's the only way to solve this problem, though. And for heavy-duty problems, you typically write something like this using higher-kinded polymorphism:
signature SHAPE_TYPE = sig
datatype shape =
Circle of real
| Rectangle of real * real
| Point
val Circle : real -> shape
val Rectangle : real * real -> shape
val Point : shape
end
functor FullShape () : SHAPE_TYPE = struct
datatype shape =
Circle of real
| Rectangle of real * real
| Point
val Circle = Circle
val Rectangle = Rectangle
val Point = Point
end
functor RemovePoint (S : SHAPE_TYPE) :> sig
type shape
val Circle : real -> shape
val Rectangle : real * real -> shape
end = struct
type shape = S.shape
val Circle = S.Circle
val Rectangle = S.Rectangle
end
structure Shape = FullShape()
structure Bound = RemovePoint(Shape)
This is extremely overkill for the example, but it also demonstrates a power you're not getting out of C# or Java without usage of reflection. This is closer to the system of inheritance, but it's a bit better designed. The added benefit here over reflection is that the same principle of "invalid program states are unrepresentable" applies here as well, because it's the exact same system being used. You'll also note that even though it's a fair bit closer conceptually to classes, the sum-type is still distinct.
Anyways, in both cases, this is now just:
DoesNotReturnPoint : Shape.shape -> Bound.shape
Haskell has actual GADTs and proper higher kinded polymorphism, and a few other features where this all looks very different and much terser. Newer languages bake subtyping into the grammar.
> If they're not actual sum types but are pretty much the same, what good does the "actually" do?
Conflation of two different things here. The examples given are syntactically similar, and they're both treating the constituent part of the grammar as a tagged union. The case isn't any cleaner was the point.
However in the broader comparison between class hierarchies and sum-types? They're not similar at all. Classes can do some of the things that sum-types can do, but they're fundamentally different and encourage a completely different approach to problem-solving, conceptualization and project structure... in all but the most rudimentary examples. As I said, my 2nd example here is far closer to a class-hierarchy system than sum-types, though it's still very different. And again, underlining that because of the properties of sum-types, thanks to their specific formalization, they're capable of things class hierarchies aren't. Namely, enforcing valid program-states at a type-level. Somebody more familiar with object-oriented formalizations may be a better person to ask than me on why that is the case.
It's a pretty complicated space to talk about, because these type systems deviate on a very basic and fundamental level. Shit just doesn't translate well, and it's easy to find false friends. Like how the Japanese word for "name" sounds like the English word, despite not being a loan word.
ackfoobar 14 hours ago [-]
You wrote a lot of words to say very little.
Anyway, to translate your example:
sealed interface Shape permits Point, Bound {}
final class Point implements Shape {}
sealed interface Bound extends Shape permits Circle, Rectangle {}
record Circle(double radius) implements Bound {}
record Rectangle(double width, double height) implements Bound {}
A `Rectangle` is both a `Bound` (weird name choice but whatever), and a `Shape`. Thanks to subtyping, no contortion needed. No need to use 7 more lines to create a separate, unrelated type.
> the Japanese word for "name" sounds like the English word, despite not being a loan word.
Great analogy, except for the fact that someone from the Java team explicitly said they're drawing inspirations from ML.
I see the same degree of contortion, actually. Far more noisy, at that.
> No need to use 7 more lines to create a separate, unrelated type.
You're still creating a type, because you understand that a sum-type with a different set of cases is fundamentally a different type. Just like a class with a different set of inheritance is a different type. And while it's very cute to compress it all into a single line, it's really not compelling in the context of readability and "write once, use many". Which is the point you were making, although it was on an entirely different part of the grammar.
> Great analogy, except for the fact that someone from the Java team explicitly said they're drawing inspirations from ML.
ML didn't invent ADTs, and I think you know it's more than disingenuous to imply the quotation means that the type system in Java (which hasn't undergone any fundamental changes in the history of the language, nor could it without drastically changing the grammar and breaking the close relationship to the JVM) was lifted from ML.
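For completeness, OCaml's polymorphic variants can express the subset-as-subtype reading directly — a sketch of my own, not from either comment:

```ocaml
(* Sketch: with polymorphic variants, the Point-free case set is a
   genuine structural subtype of the full one, so no separate nominal
   type is needed. *)
type bound = [ `Circle of float | `Rectangle of float * float ]
type shape = [ bound | `Point ]

let area : shape -> float = function
  | `Circle r -> 3.14159 *. r *. r
  | `Rectangle (w, h) -> w *. h
  | `Point -> 0.0

(* A bound coerces to a shape; going the other way needs a match. *)
let b : bound = `Circle 2.0
let _ = area (b :> shape)
```

This is the closest thing in the ML family to the subtyping relationship the sealed-interface example leans on.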
ackfoobar 13 hours ago [-]
> Substantiate this.
You never gave an example how sum types in Java/Kotlin cannot do what "real" sum types can.
>> weird name choice but whatever
> snarky potshot
Sorry that you read snark. What I meant was "I find naming this 'Bound' weird. But since I am translating your example, I'll reuse it".
> You're still creating an unrelated type
How can a type participating in the inheritance hierarchy be "unrelated"?
> I see the same degree of contortion, actually. Far more noisy, at that.
At this point I can only hope you're a Haskeller and do not represent an average OCaml programmer.
voidhorse 10 hours ago [-]
I'm not sure why people are debating the merits of sum types versus sealed types in response to this. I prefer functional languages myself, but you are entirely correct that sealed types can fully model sum types and that the type level discrimination you get for free via subtyping makes them slightly easier to define and work with than sum types reliant on polymorphism.
Operationally these systems and philosophies are quite different, but mathematically we are all working in more or less an equivalent category, and all the type-system shenanigans you have in FP are possible in OOP, modulo explicit limits placed on the language, and vice versa.
ackfoobar 9 hours ago [-]
> I'm not sure why
Me neither.
> you are entirely correct that sealed types can fully model sum types
I want to be wrong, in that case I learn something new.
wiseowise 18 hours ago [-]
> Slightly more verbose sum type declaration is worth it *when it makes using the cases cleaner.*
Correct. This is not the case when you talk about Java/Kotlin. Just ugliness and typical boilerplate heavy approach of JVM languages.
ackfoobar 18 hours ago [-]
> Just ugliness and typical boilerplate heavy approach of JVM languages.
I have provided a case how using inheritance to express sum types can help in the use site. You attacked without substantiating your claim.
wiseowise 18 hours ago [-]
Kotlin's/Java's implementation is just a poor man's implementation of a very restricted set of real sum types. I have no idea what
> This has the benefit of giving you the ability to refer to a case as its own type.
means.
ackfoobar 18 hours ago [-]
> I have no idea
I can tell.
Thankfully the OCaml textbook has this explicitly called out.
> The main downside is the obvious one, which is that an inline record can’t be treated as its own free-standing object. And, as you can see below, OCaml will reject code that tries to do so.
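The limitation quoted reads concretely like this — a minimal sketch, not the textbook's own example:

```ocaml
(* Sketch of the inline-record restriction: the record's fields are
   usable inside a match, but the record itself can't escape. *)
type event =
  | Log of { level : int; msg : string }   (* inline record *)

let describe = function
  | Log e -> string_of_int e.level ^ ": " ^ e.msg  (* fine *)

(* But `let get = function Log e -> e` is rejected: the inline
   record type has no name of its own for the value to escape with. *)
```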
wiseowise 18 hours ago [-]
That's for embedded records. You can have the same thing as Kotlin but with better syntax.
ackfoobar 17 hours ago [-]
If you don't do inline records you either
- create a separate record type, which is no less verbose than Java's approach
- use positional destructuring, which is bug prone for business logic.
Also it's funny that you think OCaml records have "better syntax". It's a weak part of the language, creating ambiguity. People work around this quirk by wrapping every record type in its own module.
You mistyped "backwards compatible change" going back to close to 3 decades.
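The module-wrapping convention mentioned above looks roughly like this — a sketch; the field-name clash is exactly what it avoids:

```ocaml
(* Sketch of the one-module-per-record convention: both records have
   a `name` field, and qualifying by module removes the ambiguity. *)
module Person = struct
  type t = { name : string }
end

module Company = struct
  type t = { name : string }
end

let p = { Person.name = "Ada" }
let c = { Company.name = "Acme" }
let label = p.Person.name ^ " @ " ^ c.Company.name
```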
nine_k 17 hours ago [-]
I wish somebody with this amount of experience would compare the benefits / shortcomings of using the ReasonML syntax. (The article mentions it once, in passing.)
ReasonML has custom operators that allow for manipulating monads somewhat sanely (>>= operators and whatnot); ReScript (ReasonML's "fork") did not, last time I checked. But ReScript does have an async/await syntax, which helps a lot with async code; ReasonML did not, last time I checked, so you had to use raw promises.
I believe Melange (which the article briefly talks about) supports let bindings with the reason syntax.
And this kinda changes everything if you React. Because you can now have sane JSX with let bindings. Which you could not until Melange. Indeed, you can PPX your way out of it in OCaml syntax, but I'm not sure the syntax highlighting works well in code editors. It did not on mine anyway, last time I checked.
So for frontend coding, Melange's ReasonML is great, as you have both, and let bindings can approximate quite well an async syntax on top of writing readable monadic code.
For backend code, as a Pythonista, I hate curlies, and I do like parenthesis-less function calls and definitions a lot. But I still have a lot of trouble, as a beginner OCamler, with non-variable function arguments, as I need to do "weird" parenthesis stuff.
Hope this “helps”!
hardwaregeek 17 hours ago [-]
I like Reason syntax and I wish it was more common, but I think if you want to engage in the OCaml community it’s probably better to just bite the bullet and use the standard syntax. It’s what almost everybody uses so you’ll need to understand it to read any code or documentation in the ecosystem
nukifw 16 hours ago [-]
Sorry, I never used ReasonML so I don't see any advantage of using ReasonML except it had the time to die twice in 4 years :)
myaccountonhn 16 hours ago [-]
I don't have extensive experience but the little I did was issues with LSP not working as well.
raphinou 18 hours ago [-]
Some years ago I also wanted to make OCaml my primary language, but rapidly encountered problems: difficulty installing (on Linux, due to the requirement of a very unusual tool whose name and function I forgot), no response from the community regarding how to solve that problem, no solid PostgreSQL driver, ....
Wanting to use a functional language I pivoted to fsharp, which was not the expected choice for me as I use Linux exclusively. I have been happy with this choice, it has even become my preferred language. The biggest problem for me was the management of the fsharp community, the second class citizen position of fsharp in the DotNet ecosystem, and Microsoft's action screwing the goodwill of the dev community (eg hot reload episode). I feel this hampered the growth of the fsharp community.
I'm now starting to use rust, and the contrast on these points couldn't be bigger.
Edit: downvoters, caring to share why? I thought sharing my experience would have been appreciated. Would like to know why I was wrong.
"use opam" is always the answer, but in reality it's the worst package manager ever. I've never seen so many packages fail to install, so many broken dependencies and miscompilations resulting in segfaults due to wrong dependencies. I just gave up on OCaml due to the crappy ecosystem, although I could have lived with the other idiosyncrasies.
sestep 17 hours ago [-]
And even if you do get opam working for a project, it's not at all reproducible and will just randomly break at some point in the future. For instance, I had this in a Dockerfile for one project:
RUN eval $(opam env) && opam install --yes dune=3.7.0
One day the build just randomly broke. Had to replace it with this:
RUN eval $(opam env) && opam install --yes dune=3.19.1
Not a big change, but the fact that this happens at all is just another part of building with OCaml feeling like building on a foundation of sand. Modern languages have at least learned how to make things reproducible with e.g. lockfiles, but OCaml has not.
johnisgood 17 hours ago [-]
Use "opam lock" and "opam pin". Additionally, Dune's lockdir feature uses opam's solver internally to generate a lock directory containing ".opam.locked" files for every dependency. This is Dune's way of having fully reproducible builds without relying on opam's switch state alone.
Dune is not a dependency manager, it is a build tool. Opam is the dependency manager. By default, Dune doesn't fetch dependencies, opam does that. That said, Dune does use opam, yeah.
nukifw 17 hours ago [-]
And the next milestone of Dune is to become an alternative package manager via Dune package Management, using a store in a "nixish" way.
johnisgood 17 hours ago [-]
I suppose it is still going to use opam internally, right?
yawaramin 17 hours ago [-]
No, it is doing its own package management implementation.
johnisgood 17 hours ago [-]
So dune is going to replace opam? Where can I read more about this if so?
I have never had issues and been writing OCaml and using opam for years.
Can you be more specific?
(Why the down-vote? He does need to be specific, right? Additionally, my experiences are somehow invalid?)
debugnik 17 hours ago [-]
I mean, it's quite clunky, but on Linux or WSL I've never had the broken experience you talk about. Could you share your setup? Was this maybe on bare macOS or Windows, in which case I totally believe it because they've been neglected?
Milpotel 5 hours ago [-]
No, just Linux or inside Docker (on Linux).
davidwritesbugs 17 hours ago [-]
I abandoned ocaml just because I couldn't get a stepping debugger to work. Can't remember the exact issues but I tried to install in vscode to no avail & I've no interest in emacs
tempodox 5 hours ago [-]
The best bird's eye overview of OCaml I've seen yet. As a long-time OCaml user, I find it entirely fair.
manoDev 17 hours ago [-]
I'm sure there's merit to the language, but the syntax seems absolutely alien to me. Some attempt to look like verbose imperative code, a bunch of semicolons, and, for some strange reason, a hatred of parentheses.
Real life sample:
let print_expr exp =
  (* Local function definitions *)
  let open_paren prec op_prec =
    if prec > op_prec then print_string "(" in
  let close_paren prec op_prec =
    if prec > op_prec then print_string ")" in
  let rec print prec exp =  (* prec is the current precedence *)
    match exp with
      Const c -> print_float c
    | Var v -> print_string v
    | Sum(f, g) ->
        open_paren prec 0;
        print 0 f; print_string " + "; print 0 g;
        close_paren prec 0
    | Diff(f, g) ->
        open_paren prec 0;
        print 0 f; print_string " - "; print 1 g;
        close_paren prec 0
    | Prod(f, g) ->
        open_paren prec 2;
        print 2 f; print_string " * "; print 2 g;
        close_paren prec 2
    | Quot(f, g) ->
        open_paren prec 2;
        print 2 f; print_string " / "; print 3 g;
        close_paren prec 2
  in print 0 exp;;
A function is defined as:
let print_expr exp =
That seems pretty hard to read at a glance, and easy to mistype as a definition.
That looks like a language you really want an IDE helping you with.
myaccountonhn 16 hours ago [-]
That code is probably some of the hardest you'll encounter in OCaml, but for me it's quite obvious what it does and easy to read, because I've worked with GADTs before. If you haven't, you'll need to study and understand them to understand the code.
I actually really like the syntax of OCaml; it's very easy to write and, when you're used to it, easy to read (easier than ReasonML IMO).
Double semicolons are afaik only used in the repl.
jallmann 15 hours ago [-]
> That seems pretty hard to read at a glance, and easy to mistype as a definition.
YMMV but let expressions are one of the nice things about OCaml - the syntax is very clean in a way other languages aren't. Yes, the OCaml syntax has some warts, but let bindings aren't one of them.
It's also quite elegant if you consider how multi-argument let can be decomposed into repeated function application, and how that naturally leads to features like currying.
> Also, you need to end the declaration with `in`?
Not if it's a top level declaration.
It might make more sense if you think of the `in` as a scope operator, eg `let x = v in expr` makes `x` available in `expr`.
> Then, semicolons...
Single semicolons are syntactic sugar for unit return values. Eg,
print_string "abc";
is the same as
let () = print_string "abc" in
javcasas 16 hours ago [-]
It also uses double semicolon because single semicolon is already used as statement separator kinda like C. So double semicolon is top-level statement terminator.
javcasas 16 hours ago [-]
`let` defines a value. Some values are "plain old values", some are functions.
The syntax is `let <constantname> <parameters> = <value> in <expression>;;`
where `expression` can also have `let` bindings.
So you can have
let one = 1 in
let two = 2 in
let three = one + two in
print_endline (string_of_int three);;
jallmann 15 hours ago [-]
The way I always think about it is in terms of scope. With:
let x = v in expr
`x` is now available for use in `expr`
In essence, an OCaml program is a giant recursive expression, because `expr` can have its own set of let definitions.
In the REPL, this is where the double semicolons come in, as a sort of hint to continue processing after the expression.
UncleOxidant 16 hours ago [-]
I mean, I guess it depends on your background, but that code looks pretty nice compared to how it would look in a language without pattern matching and ADTs. This is why the MLs excel for things like parsers, interpreters, and compilers. Beauty is in the eye of the beholder, I guess. I suspect that if you gave it a bit of time it would start to really grow on you - that's how it was in my case. At first: "WTF is this weird syntax?!", a few weeks in "oh, this makes a lot of sense, actually", a few years "Yeah, I'd much rather write this sort of thing in OCaml (or an ML in general)"
manoDev 16 hours ago [-]
I don't find the ADT and matching part weird, but rather everything else (as mentioned).
chris_armstrong 12 hours ago [-]
As someone who uses OCaml for hobby projects, I appreciate how little the language gets in your way when you want to just “get shit done”, despite the language’s origins in academia and industrial uses.
The type system usually means that I might take longer to get my code to compile, but that I won’t spend much (if any) time debugging it once I’m done.
I’m in the middle of pulling together bits of a third party library and refactoring them over several days work, and I’m pretty confident that most of the issues I’ll face when done will be relatively obvious runtime ones.
chris_armstrong 12 hours ago [-]
I almost never find a use for GADTs or functors or carefully specifying module types, but when I need them, they help me get stuff done neatly.
Even the object system which most OCaml developers avoid, is actually very useful for some specific modelling scenarios (usually hierarchies in GUIs or IaC) that comes with similar type system guarantees and minimal type annotations.
rybosome 11 hours ago [-]
I’d have liked to see the use of dependency injection via the effects system expanded upon. The idea that the example program could use pattern matching to bind to either test values or production ones is interesting, but I can’t conceptualize what that would look like with the verbal description alone.
Also, I had no idea that the module system had its own type system, that’s wild.
mrkeen 2 hours ago [-]
Haskeller here!
> The idea that the example program could use pattern matching to bind to either test values or production ones is interesting, but I can’t conceptualize what that would look like with the verbal description alone.
The article appears to have described the free monad + interpreter pattern, that is, each business-logic statement doesn't execute the action (as a verb), but instead constructs it as a noun and slots it into some kind of AST. Once you have an AST you can execute it with either a ProdAstVisitor or a TestAstVisitor which will carry out the commands for real.
More specific to your question, it sounds like the pattern matching you mentioned is choosing between Test.ReadFile and Test.WriteFile at each node of the AST (not between Test.ReadFile and Prod.ReadFile.)
I think the Haskell community turned away a little from free monad + interpreter when it was pointed out that the 'tagless final' approach does the same thing with less ceremony, by just using typeclasses.
> I’d have liked to see the use of dependency injection via the effects system expanded upon.
I'm currently doing DI via effects, and I found a technique I'm super happy with:
At the lowest level, I have a bunch of classes & functions which I call capabilities, e.g
These are tightly-focused on doing one thing, and must not know anything about the business. No code here would need to change if I changed jobs.
At the next level up I have classes & functions which can know about the business (and the lower level capabilities)
StoredCommands (fetchStoredCommands) - this uses the 'Restful' capability above to construct and send a payload to our business servers.
At the top of my stack I have a type called CliApp, which represents all the business logic things I can do, e.g.
I associate CliApp to all its actual implementations (low-level and mid-level) using type classes:
instance FileOps CliApp where
  readTextFile = readTextFileImpl
  writeTextFile = writeTextFileImpl
  ...

instance Logger CliApp where
  info = infoImpl
  warn = warnImpl
  err = errImpl
  ...

instance StoredCommands CliApp where
  fetchStoredCommands = fetchStoredCommandsImpl
  ...
In this way, CliApp doesn't have any of 'its own' implementations, it's just a set of bindings to the actual implementations.
I can create a CliTestApp which has a different set of bindings, e.g.
instance Logger CliTestApp where
  info msg = -- maybe store message using in-memory list so I can assert on it?
Now here's where it gets interesting. Each function (all the way from top to bottom) has its effects explicitly in the type system. If you're unfamiliar with Haskell, a function either having IO or not (in its type sig) is a big deal. Non-IO essentially rules out non-determinism.
The low-level prod code (capabilites) are allowed to do IO, as signaled by the MonadIO in the type sig:
readTextFileImpl :: MonadIO m => FilePath -> m (Either String Text)
but the equivalent test double is not allowed to do IO, per:
readTextFileTest :: Monad m => FilePath -> m (Either String Text)
And where it gets crazy for me is: the high-level business logic (e.g. fetchStoredCommands) will be allowed to do IO if run via CliApp, but will not be allowed to do IO if run via CliTestApp, which for me is 'having my cake and eating it too'.
Another way of looking at it is, if I invent a new capability (e.g. Caching) and start calling it from my business logic, the CliTestApp pointing at that same business logic will compile-time error that it doesn't have its own Caching implementation. If I try to 'cheat' by wiring the CliTestApp to the prod Caching (which would make my test cases non-deterministic) I'll get another compile-time error.
Would it work in OCaml? Not sure, the article says:
> Currently, it should be noted that effect propagation is not tracked by the type system
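A rough OCaml counterpart for comparison, using first-class modules rather than type classes. Note that OCaml won't distinguish IO from non-IO in the types, so only the implementation-swapping half carries over. A hedged sketch with invented names:

```ocaml
(* A sketch of the same DI shape in OCaml: the capability is a module
   signature, and prod/test are two implementations the business
   logic is parameterized over. *)
module type FILE_OPS = sig
  val read_text_file : string -> (string, string) result
end

module Prod_file_ops : FILE_OPS = struct
  let read_text_file path =
    try Ok (In_channel.with_open_text path In_channel.input_all)
    with Sys_error msg -> Error msg
end

module Test_file_ops : FILE_OPS = struct
  let read_text_file _path = Ok "line one\nline two"
end

(* Business logic takes the capability as a first-class module. *)
let line_count (module F : FILE_OPS) path =
  Result.map
    (fun s -> List.length (String.split_on_char '\n' s))
    (F.read_text_file path)
```

Swapping `Prod_file_ops` for `Test_file_ops` at the call site plays the role of the `CliApp`/`CliTestApp` instance selection, but nothing stops the test module from doing IO; that guarantee is the part OCaml's current type system doesn't give you.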
rybosome 12 minutes ago [-]
Thanks for the detailed reply, that’s very cool! This looks great, very usable way to do DI.
Do you use Haskell professionally? If so, is this sort of DI style common?
zem 16 hours ago [-]
ocaml is one of my favourite languages too, but I've found myself being drawn towards rust for my latest project due to its major superpower - you can write a rust library that looks like a c library from the outside, and can be called from other languages via their existing c ffi mechanisms. I feel like by writing the library in ocaml I would have a better experience developing it, but be giving up on that free interop.
amelius 5 hours ago [-]
Do they have a decent GUI library / binding yet?
JaggerJo 4 hours ago [-]
OCaml does not have what you'd call a fully featured GUI framework, to my knowledge (besides rendering webpages).
F# has FuncUI - based on Avalonia. All just possible because of the ecosystem.
I'm kind of surprised though because a lot of languages have a Qt binding. Implementing this is only a small investment for a relatively big gain.
shortrounddev2 19 hours ago [-]
OCaml is a great language without great tooling. Desperately needs a good LSP implementation to run breakpoints and other debugging tools on VSCode or other LSP-aware IDEs. I know there ARE tools available but there isn't great support for them and they don't work well
debugnik 18 hours ago [-]
LSP isn't the protocol that interfaces with debuggers, that'd be DAP. You're right that OCaml debugging is kinda clunky at the moment.
OCaml does have an okay LSP implementation though, and it's getting better; certainly more stable than F#'s in my experience, since that comparison is coming up a lot in this comment section.
StopDisinfo910 3 hours ago [-]
What’s clunky about the Ocaml debugger?
Ocaml has been shipping with an actual fully functional reverse debugger for ages.
Is the issue mostly integration with the debugging ui of VS Code?
Yes, I see what you mean now. You encountered a bug around system threads, and dune didn't properly pass your artifacts. That's indeed annoying.
I deeply dislike dune myself and never use it. I just use the Ocaml toolchain like I would a good old C one which might explain our different experiences.
?? OCaml has had a completion engine for as long as I can remember (definitely over a decade) and it powers their LSP these days. I do know however that the community focuses mostly on Vim and Emacs.
vram22 14 hours ago [-]
Is OCaml somewhat suitable for desktop GUI app programming?
I saw this in the OP:
>For example, creating a binding with the Tk library
and had also been thinking about this separately a few days ago, hence the question.
loxs 18 hours ago [-]
I migrated from OCaml to Rust around 2020, haven't looked back. Although Rust is quite a lot less elegant and has some unpleasant deficiencies (lambdas, closures, currying)... and I end up having to close one eye sometimes and clone some large data structure to make my life easier... But regardless, its huge ecosystem and great tooling allow me to build things comparatively so easily that OCaml has no chance. As a bonus, the end result is seriously faster - I know because I rewrote one of my projects, and for some time I had feature parity between the OCaml and Rust versions.
Nevertheless, I have fond memories of OCaml and a great amount of respect for the language design. Haven't checked on it since, probably should. I hope part of the problems have been solved.
jasperry 16 hours ago [-]
Your comment makes me think the kind of people who favor OCaml over Rust wouldn't necessarily value a huge ecosystem or the most advanced tooling. They're the kind who value the elegance aspect above almost all else, and prefer to build things from the ground up, using no more than a handful of libraries and a very basic build procedure.
javcasas 15 hours ago [-]
Were you using the ocamlopt compiler? By default, OCaml runs in a VM, but few people figure that out because it is not screaming its name all the time like a Pokémon (looking at you, JVM/CLR). But OCaml can be compiled to machine code with significant performance improvements.
debugnik 14 hours ago [-]
> By default, ocaml runs in a VM,
The Dune build system does default to ocamlopt nowadays, although maybe not back around 2020.
ackfoobar 18 hours ago [-]
> the end result is seriously faster
Do you have a ballpark value of how much faster Rust is? Also I wonder if OxCaml will be roughly as fast with less effort.
16 hours ago [-]
sparkie 5 hours ago [-]
> At present, I don’t know anyone who has seriously used languages like OCaml or Haskell and was happy to return to languages with less sophisticated type systems (though an interesting project can sometimes justify such a technological regression).
Recovered typeaholic here. I still occasionally use OCaml and I primarily wrote F# and Haskell for years. I've been quite deep down the typing rabbit hole, and I used to scorn at dynamically typed languages.
Now I love dynamic typing - but not the Python kind - I prefer the Scheme kind - latent typing. More specifically, the Kernel[1] kind, which is incredibly powerful.
> I think the negative reputation of static type checking usually stems from a bad experience.
I think this goes two ways. Most people's experience with dynamic typing is the Python kind, and not the Kernel kind.
To be clear, I am not against static typing, and I love OCaml - but there are clear cases where static typing is the wrong tool - or rather, no static typing system is sufficient to express problems that are trivial to write correctly with the right dynamic types.
Moreover, some problems are inherently dynamic. Take for example object-capabilities (aka, security done right). Capabilities can be revoked at any time. It makes no sense to try and encode capabilities into a static type system - but I had such silly thoughts when I was a typeaholic, and I regularly see people making the same mistake. Wouldn't it be better to have a type system which can express things which are dynamic by nature?
And this is my issue with purely statically typed systems: They erase the types! I don't want to erase the types - I want the types to be available at runtime so that I can do things with them that I couldn't do at compile time - without me having to write a whole new interpreter.
My preference is for Gradual Typing[2], which lets us use both worlds. Gradual typing is static typing with a `dynamic` type in the type system, and sensible rules for converting between dynamic and static types - no transitivity in consistency.
People often mistake gradual typing with "optional typing" - the kind that Erlang, Python and Typescript have - but that's not correct. Those are dynamic first, with some static support. Gradual typing is static-first, with dynamic support.
Haskell could be seen as Gradual due to the presence of `Data.Dynamic`, but Haskell's type system, while a good static type system, doesn't make a very good dynamic type system.
Aside, my primary language now is C, which was the first language I learned ~25 years ago. I regressed! I came back to C because I was implementing a gradually typed language and F#/OCaml/Haskell were simply too slow to make it practical, C++/Rust were too opinionated and incompatible with what I want to achieve, and C (GNU dialect) let me have almost complete control over the CPU, which I need to make my own language good enough for practical use. After writing C for a while I learned to love it again. Manually micro-optimizing with inline assembly and SIMD is fun!
> Now I love dynamic typing - but not the Python kind - I prefer the Scheme kind - latent typing.
Could you elaborate on the difference? I was under the impression that "latent typing" just means "values, not variables, have types", which would make Python (without type annotations) latently typed as well.
pshirshov 14 hours ago [-]
Extremely dated. No HKTs, no typeclasses (modules are not a good substitute), no call-site expansion.
nukifw 14 hours ago [-]
- Extremely dated: we have almost one new release every six months, and in recent releases the language runtime has been changed and user-defined effects have been introduced.
True, but then concurrency via algebraic effects makes it look more modern than Rust.
13 hours ago [-]
FrustratedMonky 18 hours ago [-]
In F# comparison. Modules "my opinion, strongly justify preferring one over the other".
That's a strong stance on modules. Out of my ignorance: what do they do that provides that much benefit?
debugnik 17 hours ago [-]
In short, OCaml modules are used for coarse-grained generics.
Modules are like structurally-typed records that can contain both abstract types and values/functions dependent on those types; every implementation file is itself a module. When passed to functors (module-level functions), they allow you to parameterize large pieces of code, depending on multiple types and functions, all at once quite cleanly. And simply including them or narrowing their signatures is how one exports library APIs.
(The closest equivalent I can imagine to module signatures is Scala traits with abstract type members, but structurally-typed and every package is an instance.)
However, they are a bit too verbose for finer-grained generics. For example, a map with string keys needs `module String_map = Map.Make(String)`. There is limited support for passing modules as first-class values with less ceremony, hopefully with more on the way.
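To make the last point concrete, the `Map.Make` ceremony in full:

```ocaml
(* The functor application mentioned above: Map.Make builds a map
   module specialized to string keys. *)
module String_map = Map.Make (String)

let counts =
  String_map.empty
  |> String_map.add "apples" 3
  |> String_map.add "pears" 5

let pears = String_map.find "pears" counts
```

One line of boilerplate per key type, which is the "too verbose for finer-grained generics" complaint in miniature.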
akkad33 17 hours ago [-]
I don't know OCaml well, but I think this is referring to the fact that modules in OCaml can be generic. In F# there are no HKTs, that is, types that can be parameterised over type constructors. So in F# you have to have List.map, Option.map, etc., whereas in a language like OCaml or Haskell you would have one parametrised module.
moi2388 19 hours ago [-]
If I wanted to program in OCaml, I'd program in F# instead
nukifw 19 hours ago [-]
Hi! Thank you for your interest (and for potentially reading this).
Yes, the trick is expanded here: https://libres.uncg.edu/ir/asu/f/Johann_Patricia_2008_Founda... (if you have `type (_, _) eq = Refl : ('a, 'a) eq` you should be able to encode every useful GADT). But having compiler support is nice for specific reasons, like being able to "try" to detect unreachable cases in match branches, for example.
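Spelled out in OCaml syntax, the equality witness being discussed is just this (a sketch):

```ocaml
(* The equality witness: Refl is the only constructor, and matching
   on it proves the two type parameters are the same type, which is
   what lets cast be total and safe. *)
type (_, _) eq = Refl : ('a, 'a) eq

let cast : type a b. (a, b) eq -> a -> b = fun Refl x -> x
```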
debugnik 18 hours ago [-]
I've used equality witnesses in F# before, they kinda work but can't match proper GADTs. First you'll need identity conversion methods on the witness, because patterns can't introduce type equalities, then you'll realise you can't refute unreachable branches for the same reason, so you still need to use exceptions.
moi2388 17 hours ago [-]
Thanks, that was an interesting read for sure!
jimbob45 18 hours ago [-]
It's always weird when Microsoft pulls an "embrace, extend, extinguish" but the "extinguish" part happens without their involvement or desire and then we're all stuck wondering how Microsoft got left holding the bag.
People agree to go to great lengths to use a tool that has some kind of superpower, despite syntactic weirdness or tooling deficiencies. People study and use LISP descendants like Clojure, APL descendants like K, "academic" languages like Haskell and OCaml; they write hobby projects in niche languages like Nim or Odin; they even use C++ templates.
Why is Ada so under-represented? It must have a very mature ecosystem. I suspect that it's just closed-source mostly, and the parties involved don't see much value in opening up. If so, Ada is never going to make it big, and will slowly retreat under the pressure of better-known open alternatives, even in entrenched areas like aerospace.
https://www.adacore.com/
https://www.ghs.com/products/ada_optimizing_compilers.html
https://www.ptc.com/en/products/developer-tools/apexada
https://www.ddci.com/products_score/
http://www.irvine.com/tech.html
http://www.ocsystems.com/w/index.php/OCS:PowerAda
http://www.rrsoftware.com/html/prodinf/janus95/j-ada95.htm
Ada was too demanding on hardware for the kind of computers people could afford at home, and we could already do our Ada-like programming with Modula-2 and Object Pascal dialects. Hence Ada lost home computing, and FreePascal/Delphi would be much more used today had it not been for Borland getting too greedy.
On big iron systems, especially among UNIX vendors, they always wanted extra bucks for Ada.
When Sun started the trend of UNIX vendors charging for developer tools as an additional SKU, Ada wasn't part of the package but rather an additional license on top. So when you already pay for C and C++ compilers, why would someone pay a few thousand (select currency) more if not required to do so, only to feel good about writing safer software? Back in the day no one cared about the development cost of fixing security bugs.
I have a pet theory that it shares the same thing as any heavily typed language; it's difficult. And people aren't willing to accept that when you get it to compile at all, it'll probably work fine.
So many developers (and many more middle management) are not willing to trade the longer term stability/lack of runtime errors for the quick-to-production ability of other languages.
Kotlin is definitely available at Google, but when talking about sum types et al it's not nearly as nice to use as Rust / OCaml.
I definitely find it (and jetpack compose) make developing android apps a much better experience than it used to be.
What I like a lot about Kotlin are its well written documentation and the trailing lambdas feature. That is definitely directly OCaml inspired (though I also recently saw it in a newer language, the "use" feature in Gleam). But in Kotlin it looks nicer imo. Allows declarative code to look pretty much like json which makes it more beginner friendly than the use syntax.
But Kotlin doesn't really significantly stand out among Java, C#, Swift, Go, etc. And so it is kind of doomed to be a somewhat domain specific language imo.
Kotlin has a very similar syntax to Groovy, which already had that feature (it looks identical in Groovy and Kotlin)... and I believe Groovy itself took that from Ruby (Groovy tried to add the most convenient features from Python and Ruby). Perhaps that is what came from OCaml?? No idea, but I'd say the chance Kotlin copied Groovy is much higher, as JB was using Java and Groovy before Kotlin existed.
If that wasn't the case, Google would support Java latest with all features, alongside Kotlin, and let the best win.
See how much market uptake Kotlin has outside Android, when it isn't being pushed and needs to compete against Java vLatest on equal terms.
Android is Google's J++, which Sun sued and won.
Kotlin is Google's C#.
Plus everyone keeps forgetting Kotlin is a JVM-based language. Android Studio and Gradle are implemented in JVM languages, JVMs are implemented in a mix of C, C++ and Java (zero Kotlin), and Android still uses Java, only that Google takes out of OpenJDK only what they feel like, and currently that is Java 17 LTS. Most of the work on OpenJDK was done by Oracle employees.
I think it will be very hard for us to find anything in common to agree on then.
Anyway, it’s pretty clear Google is pushing Kotlin because they don’t want to have anything to do with Oracle which has not been cleared by the verdict of their last trial. The situation has nothing to do with anything technical.
Blaming them for pushing Kotlin when the alternative you offer is them using a language they have already been sued for their use of seems extremely misguided to me.
I call them dishonest for comparing an outdated Java 7 subset with Kotlin, when back in 2017 the latest version was Java 9, and in 2025 it is Java 24, and yet the documentation keeps using Java 8 for most examples of Java versus Kotlin.
How come Google doesn't want anything to do with Oracle, when it is impossible to build an Android distribution without a JVM? Again, people like yourself keep forgetting OpenJDK is mostly a product of Oracle employees (about 80%), with the remaining efforts distributed across Red Hat (IBM), IBM, Azul, Microsoft and JetBrains (I wonder what those do on Android). Kotlin doesn't build for Android without a JVM implementation, Gradle requires a JVM implementation, Android Studio requires a JVM implementation, Maven Central has JVM libraries, ...
If Google doesn't want anything to do with Oracle, why aren't they using Dart, created by themselves, instead of a language that is fully dependent on Oracle's kingdom for its very existence?
They clearly don’t want to add anything which couldn’t be reasonably covered by the result of the previous trial.
The list you give was all already there then. Moving to a more recent version of Java wouldn’t be.
> OpenJDK is mostly a product from Oracle employees (about 80%)
Sun employees, not Oracle employees. Using Sun technology was fine, using Oracle technology is something else entirely.
Can start here, https://dev.java/contribute/openjdk/
"Once you have contributed several changes (usually two) you can become an Author. An author has the right to create patches but cannot push them. To push a patch, you need a Sponsor. Gaining a sponsorship is usually achieved through the discussions you had on the mailing lists.
In order to become an Author, you also need to sign the Oracle Contribution Agreement (OCA)."
Then go into https://openjdk.org/bylaws
"The OpenJDK Lead is an OpenJDK Member, appointed by Oracle, who directs the major efforts of the Community, which are new implementations of the Java SE Platform known as JDK Release Projects."
And this nice contribution overview from Java 22,
https://blogs.oracle.com/java/post/the-arrival-of-java-22
"Of the 26,447 JIRA issues marked as fixed in Java 11 through Java 22 at the time of their GA, 18,842 were completed by Oracle employees while 7,605 were contributed by individual developers and developers working for other organizations. Going through the issues and collating the organization data from assignees results in the following chart of organizations sponsoring the development of contributions in Java:"
To spare you the math, 77% were done by Oracle employees.
Now please show us how Kotlin compiles for Android without using Java.
Doesn't look like Google got rid of Oracle to me; more like they didn't even consider Dart, nor could Go stand a chance against the Java ecosystem among Android developers.
Your link doesn’t change any of that nor your clearly condescending comment before. You are perfectly aware of the fact by the way and you know exactly what I meant so I don’t really understand the game you are playing.
Oracle can claim Sun contribution as their own as much as they want. It doesn’t change the fact that you would have to be insane to touch anything they do now that it’s Oracle property.
I am playing the FACTS game.
In many use cases, even if the performance is within the project delivery deadlines, there will be worthless discussions about performance benchmarks completely irrelevant to the task at hand.
And ironically many of the same folks are using Electron based apps for their workflows.
* OPAM is quite buggy and extremely confusing.
* Windows support is very bad. If you ever tried to use Perl on Windows back in the day... it's worse than that.
* Documentation is terse to the point of uselessness.
* The syntax style is quite hard to mentally parse and also not very recoverable. If you miss some word or character the error can be "the second half of the file has a syntax error". Not very fun. Rust's more traditional syntax is much easier to deal with.
Rust basically has none of those issues. Really the only advantage I can see with OCaml today is compile time, which is important, but it's definitely not important enough to make me want to use OCaml.
The only contact with OCaml I had was that I wrote a bug report to a university professor because I wanted his tool to process one of my files, but the file was larger than OCaml's int type could handle. That itself wasn't the problem — he wrote that it wasn't straightforward to fix. (This is a bug of the type "couldn't have happened in Common LISP". But I guess even in C one could replace int by FILE_SIZE_TYPE and #define it as size_t, for instance.)
The great thing is we have choice. We have a huge number of ways to express ideas and ... do them!
I might draw a parallel with the number of spoken languages extant in the UK (only ~65M people). You are probably familiar with English. There are rather a lot more languages here. Irish, Scottish Gaelic, Welsh - these are the thriving Celtic languages (and they probably have some sub-types). Cornish formally died out in the sixties (the last two sisters that spoke it natively passed away) but it has been revived by some locals, and given that living people could communicate with relos with first hand experience, I think we can count that a language that is largely saved. Cumbric ... counting still used by shepherds - something like: yan, tan, tithera toe.
I am looking at OCaml because I'm the next generation to worry about genealogy in my family and my uncle has picked Geneweb to store the data, taking over from TMG - a Windows app. His database contains roughly 140,000 individuals. Geneweb is programmed in OCaml.
If you think that programming languages are complicated ... have a go at genealogy. You will soon discover something called GEDCOM and then you will weep!
Choice is good of course so do keep up the good work.
I am no stranger to ML type systems. My first one was Caml Light, OCaml was still known as Objective Caml, and Miranda was still something being discussed in programming language lectures at my university.
From what I see, I also kind of find the same: too many people rush out for Rust thinking that an ML type system is something new introduced by Rust, without having the background of where it all comes from.
I think they have been optional for like 20 years, except in the top-level interactive environment to force execution.
That being said, I still don't get why people are so much upset with the syntax. You'll integrate it after a week writing OCaml code.
And I think a big part of the reason that Elixir has done so well is its syntax (Elixir pretty much started out as Erlang-but-with-Ruby-syntax).
Arguably, that could have been Scala and for a while it seemed like it would be Scala but then it kind of just... didn't.
I suspect some of that was that the programming style of some high profile Scala packages really alienated people by pushing the type system and operator overloading much farther than necessary.
It tries to be a better Java and a better OCaml at the same time. This split personality led to Scala’s many dialects, which made it notorious for being difficult to read and reason about, particularly as a mainstream language contender.
Above all, Scala is considered a functional language with imperative OOP qualities. And it more or less fits that description. But like it or not primarily functional languages don’t have a strong reputation for building large maintainable enterprise software.
That’s the quiet part no one says out loud.
It’s like how in academic circles Lisp is considered the most pure and most powerful of programming languages, which may be true. At the same time most real-world decision makers see it as unsuitable as a mainstream language. If it were otherwise, we’d have seen a Lisp contend with imperative langs such as Java, C#, TypeScript, etc.
I’ve always attributed this disconnect to the fact that people naturally model the world around them as objects with state — people don’t think functionally.
Just Arc, clone and panic your way to success! :)
Many academically-trained developers never got exposed to FP in school, and to this day you can still hear, albeit in much lesser numbers thanks to the popularity of Elixir/Clojure/etc., the meme of FP being "hard" perpetuated.
---
1: I would go so far as to blame Haskell for the misplaced belief that FP means overcomplicated type theory when all you want is a for loop and a mutable data-structure.
2: I played with OCaml 10+ years ago, and couldn't make head or tails of it. I tried again recently, and it just felt familiar and quite obvious.
> I'm certain the meme was mostly born out of unfamiliarity
> I would go so far as to blame Haskell for the misplaced belief that FP means overcomplicated type theory
Maybe I got lucky being in one of the most relevant universities in Portugal, however I can tell that others in the country strive for similar quality for their graduates, even almost 40 years later.
Haskell of course has some of this, but immutability means that Haskell doesn't have to have answers for lots of things. And you want pattern matching as your basic building block, but at the end of the day most of your code won't have pattern matching and will instead rely on higher level patterns (that can build off of ADTs providing some degree of certainty on totality etc)
When I see Rust topping the "most loved language" on Stack Overflow etc. what I think is really happening is that people are using a "modern" language for the first time. I consistently see people gushing about, e.g., pattern matching in Rust. I agree pattern matching is awesome, but it is also not at all novel if you are a PL nerd. It's just that most commercial languages are crap from a PL nerd point of view.
So I think "no gc but memory safe" is what got people to look at Rust, but it's 1990s ML (ADTs, pattern matching, etc.) that keeps them there.
[1]: https://github.com/oxcaml/oxcaml
[2]: https://docs.scala-lang.org/scala3/reference/experimental/cc...
Yeah; this is my experience. I've been working in C professionally lately after writing rust fulltime for a few years. I don't really miss the borrow checker. But I really miss ADTs (eg Result<>, Option, etc), generic containers (Vec<T>), tuples, match expressions and the tooling (Cargo).
You can work around a lot of these problems in C with grit and frustration, but rust just gives you good answers out of the box.
I think it was more about "the performance of C, but with memory safety and data race safety".
A lot of effort went into making it efficient thanks to the web, while Python sorta has its hands tied behind its back due to exposing internals that can be readily used from C.
OCaml's GC design is a pretty simple one: two heaps, one for short-lived objects and another one for the long-lived objects, and it is a generational and mostly non-moving design. Another thing that helps tremendously is the fact that OCaml is a functional programming language[1], which means that, since values are never mutated, most GC objects are short or very short-lived and never hit the other heap reserved for the long-lived objects, and the short-lived objects perish often and do so quickly.
So, to recap, OCaml’s GC is tuned for simplicity, predictable short pauses, and easy interoperability, whereas Java’s GC is tuned for maximum throughput and large-scale heap management with sophisticated concurrency and compaction.
[0] Maybe it still is – I am not sure.
[1] It is actually a multiparadigm design, although most code written in OCaml is functional in its style.
That surely has a performance cost.
I don't think it was on-device code, as they talked about porting Python projects. But you can watch the talk to see if I'm misremembering.
No, that wouldn't have made the difference. No one passed on OCaml because it didn't have multicore or because they were annoyed by the semicolons.
People don't switch languages because the new language is "old language but better". They switch languages because a) new language does some unique thing that the old language didn't do, even if the unique thing is useless and irrelevant, b) new language has a bigger number than old language on benchmarks, even if the benchmark is irrelevant to your use case, or c) new language claims to have great interop with old language, even if this is a lie.
There is no way OCaml could have succeeded in the pop culture that is programming language popularity. Yes, all of the actual reasons to use Rust apply just as much to OCaml and if our industry operated on technical merit we would have migrated to OCaml decades ago. But it doesn't so we didn't.
People wouldn't care much for Rust at all if it didn't offer two things that are an absolute killer feature (that OCaml does not have):
* no GC, while being memory safe.
* high performance on par with C while offering no-cost high level conveniences.
There's no other language to this day that offers anything like that. Rust really is unique in this area, as far as I know.
The fact that it also has a very good package manager and was initially backed by a big, trusted company, Mozilla, while OCaml comes from a research lab, also makes this particular race a no-brainer unless you're into Functional Programming (which has never been very popular, no matter the language).
(They are the same language)
Reason at least was an active collaboration between several projects in the OCaml space with some feedback into OCaml proper (even though there was a lot of initial resistance IIRC).
Whereas the Go and Rust communities, for example, were just fine with having corporate sponsorship driving things.
They really don't, less than 5% of opam packages depend on Base and that's their bottom controversial dependency I'd say. Barely anyone's complaining about their work on the OCaml platform or less opinionated libraries. I admit the feeling that they do lingers, but having used OCaml in anger for a few years I think it's a non-issue.
What they do seem to control is the learning pipeline, as a newcomer you find yourself somewhat funneled to Base, Core, etc. I tried them for a while, but eventually understood I don't really need them.
But going way back while yeah the team at Google controlled the direction, there were some pretty significant contributions from outside to channels, garbage collection, and goroutines and scheduling..
That's about the time-frame where I got into OCaml so I followed this up close.
The biggest hindrance in my opinion is/was boxed types.
Too much overhead for low level stuff, although there was a guy from oxbridge doing GL stuff with it.
Rust might gain some adoption among people who are new to high performance software and see a narrative that its the only game in town, but it turns off a lot of older folks like myself who know it isn't and that community matters.
gl to you and people like you trying to get it back on track!
Rust is an exciting language. It comes up because many people like it.
The most involved project I did with it was a CRUD app for organising Writer's Festivals.
The app was 100% OCaml (ReasonML so I could get JSX) + Dream + HTMX + DataTables. I used modules to get reusable front end templates. I loved being able to make a change to one of my data models and have the compiler tell me almost instantly where the change broke the front end. The main value of the app was getting data out of excel into a structured database, but I was also able to provide templated and branded itineraries in .odt format, create in memory zipped downloads so that I didn't need to touch the server disk. I was really impressed by how much I could achieve with the ecosystem.
But having to write all my database queries in strings and then marshal the data through types was tiring (and effectively not compile time type checked) and I had to roll my own auth. I often felt like I was having to work on things that were not core to the product I was trying to build.
I've spent a few years bouncing around different languages and I think my take away is that there is no perfect language. They all suck in their own special way.
Now I'm building an app just for me and I'm using Rails. Pretty much everything I've wanted to reach for has a good default answer. I really feel like I'm focused on what is relevant to the product I'm building and I'm thinking about things unrelated to language like design layout and actually shipping the thing.
One thing I am wondering about in the age of LLMs is if we should all take a harder look at functional languages again. My thought is that if FP languages like OCaml / Haskell / etc. let us compress a lot of information into a small amount of text, then that's better for the context window.
Possibly we might be able to put much denser programs into the model and one-shot larger changes than is achievable in languages like Java / C# / Ruby / etc?
Aside from the obvious problem that there's not enough FP in the training corpus, it seems like terser languages don't work all that well with LLMs.
My guess is that verbosity actually helps the generation self-correct... if it predicts some "bad" tokens it can pivot more easily and still produce working code.
I’d believe that, but I haven’t tried enough yet. It seems to be doing quite well with jq. I wonder how its APL fares.
When Claude generates Haskell code, I constantly want to reduce it. Doing that is a very mechanical process; I wonder if giving an agent a linter would give better results than overloading it all to the LLM.
The power of Haskell in this case is the fearless refactoring the strong type system enables. So even if the code generated is not beautiful, it can sit there and do a job until the surrounding parts have taken shape, and then be refactored into something nice when I have a moment to spare.
The expressive type system catches a lot of mistakes, and the fact that they are compile errors which can be fed right into the LLM again means that incorrect code is caught early.
The second is property based testing. With it I have had the LLM generate amazingly efficient, correct code, by iteratively making it more and more efficient – running quickcheck on each pass. The LLM is not super good at writing the tests, but if you add some yourself, you quickly root out any mistakes in the generated code.
This might not be impossible to achieve in other languages, but I haven’t seen it used as prevailently in other languages.
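As a rough sketch of the loop being described — hand-rolled here in OCaml rather than using QuickCheck or its OCaml counterpart QCheck, so the `check` function and generator are illustrative only:

```ocaml
(* A minimal property-based test runner: generate random inputs,
   fail loudly if the property is ever falsified. Real libraries
   (QCheck, QuickCheck) add shrinking and generator combinators. *)
let check ?(trials = 1000) gen prop =
  for _ = 1 to trials do
    let x = gen () in
    if not (prop x) then failwith "property falsified"
  done

let () =
  Random.self_init ();
  let gen () = List.init (Random.int 20) (fun _ -> Random.int 100) in
  (* Property: reversing twice is the identity. *)
  check gen (fun l -> List.rev (List.rev l) = l);
  (* Property: sorting is idempotent. *)
  check gen (fun l ->
      let s = List.sort compare l in
      List.sort compare s = s)
```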
And then I didn't even make this "experiment" with Java or another managed, more imperative language which could have shed some weight due to not caring about manual memory management.
So not sure how much truth is in there - I think it differs based on the given program: some lend itself better for an imperative style, others prefer a more functional one.
For instance, dependent types allow us to say something like "this function will return a sorted list", or even "this function will return a valid Sudoku solution", and these things will be checked at compile time--again, at compile time.
Combine this with an effect system and we can suddenly say things like "this function will return a valid Sudoku solution, and it will not access the network or filesystem", and then you let the LLM run wild. You don't even have to review the LLM output, if it produces code that compiles, you know it works, and you know it doesn't access the network or filesystem.
Of course, if LLMs get a lot better, they can probably just do all this in Python just as well, but if they only get a little better, then we might want to build better deterministic systems around the unreliable LLMs to make them reliable.
Rather, immutability/purity is a huge advantage because it plays better with the small context window of LLM's. An LLM then doesn't have to worry about side effects or mutable references to data outside the scope currently being considered.
My experience in the past with something like cats-effect has been that there are straightforward things that aren't obvious, and if you haven't been using it recently, and maybe even if you've been using it but haven't solved a similar problem recently, you can get stuck trawling through the docs squinting at type signatures looking for what turns out to be, in hindsight, an elegant and simple solution. LLMs have vastly reduced this kind of friction. I just ask, "In cats-effect, how do I...?" and 80% of the time the answer gets me immediately unstuck. The other 20% of the time I provide clarifying context or ask a different LLM.
I haven't done enough maintenance coding yet to know if this will radically shift my view of the cost/benefit of functional programming with effects, but I'm very excited. Writing cats-effect code has always been satisfying and frustrating in equal measure, and so far, I'm getting the confidence and correctness with a fraction of the frustration.
I haven't unleashed Claude Code on any cats-effect code yet. I'm curious to see how well it will do.
Claude Code’s Haskell style is very verbose; if-then-elsey, lots of nested case-ofs, do-blocks at multiple levels of indentation, very little naming things at top-level.
Given a sample of a simple API client, and a request to do the same but for another API, it did very well.
I concluded that I just have more opinions about Haskell than Java or Rust. If it doesn’t look nice, why even bother with Haskell.
I reckon that you could seed it with style examples that take up very little context space. Also, remind it to not enable language pragmas per file when they’re already in .cabal, and similar.
Granted, this may just be an argument for being more comfortable reading/writing code in a particular style, but even without the advantages of LLMs adoption of functional paradigms and tools has been a struggle.
There are ppx things that can automatically derive "to string" functions, but it's a bit of effort to set up, it's not as nice to use as what's available in Rust, and it can't handle things like Set and Map types without extra work, e.g. [1] (from 2021 so situation may have changed).
Compare to golang, where you can just use "%v" and related format strings to print nearly anything with zero effort.
[1] https://discuss.ocaml.org/t/ppx-deriving-implementation-for-...
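For comparison, here is roughly what the derived printer amounts to — written out by hand so the snippet runs without ppx_deriving configured; with the ppx you would write only the type definition plus the `[@@deriving show]` attribute:

```ocaml
(* Hypothetical example type; with ppx_deriving you would write
     type shape = Circle of float | Rect of float * float
     [@@deriving show]
   and get show_shape generated. The hand-written equivalent: *)
type shape =
  | Circle of float
  | Rect of float * float

let show_shape = function
  | Circle r -> Printf.sprintf "Circle %F" r
  | Rect (w, h) -> Printf.sprintf "Rect (%F, %F)" w h

let () = assert (show_shape (Circle 1.) = "Circle 1.")
```

This is exactly the boilerplate the ppx exists to remove, and what Go's `%v` gives you reflectively for free.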
Python does it best from what I've seen so far, with its __repr__ method.
I know .NET in and out, so I might be biased. Most of the boring parts have multiple good solutions that I can pick from. I don't have to spend time on things that are not essential to the problem I actually want to solve.
I've used F# professionally for multiple years and maintain a quite popular UI library written in it. But even with .NET there still are gaps because of the smaller F# language ecosystem. Not everything "just works" between CLR languages - sometimes it's a bit more complicated.
The main point I'm trying to make is that going off the beaten path (C#) for example also comes with a cost. That cost might or might not be offset by the more expressive language. It's important to know this so you are not surprised by it.
With OCaml it's similar I'd say. You get a really powerful language, but you're off the beaten path. Sure, there are a few companies using it in production - but their use case might be different than yours. On Jane Streets Threads and Signals podcast they often talk about their really specific use cases.
1: https://blog.darklang.com/new-backend-fsharp/
I’ve always been curious about OCaml, especially since some people call it “Go with types” and I’m not a fan of writing Rust. But I’m still not sold on OCaml as a whole; its evangelists just don’t win me over the way the Erlang, Ruby, Rust, or Zig folks do. I just can't see the vision.
But OCaml sadly can't replace F# for all my use cases. F# does get access to many performance-oriented features that the CLR supports and OCaml simply can't, such as value-types. Maybe OxCaml can fix that long term, but I'm currently missing a performant ML-like with a simple toolchain.
And the best way I can describe why is that my code generally ends up with a few heavy functions that do too much; I can fix it once I notice it, but that's the direction my code tends to go in.
In my OCaml code, I would look for the big function and... just not find it. No single workhorse that does a lot - for some reason it was just easier for me to write good code.
Now I do Rust for side projects because I like the type system - but I would prefer OCaml.
I keep meaning to checkout F# though for all of these reasons.
The type A → (B → C) is isomorphic to (A × B) → C (via currying). This is analogous to the rule (cᵇ)ᵃ = cᵇ·ᵃ.
The type (A + B) → C is isomorphic to (A → C) × (B → C) (a function with a case expression can be replaced with a pair of functions). This is analogous to the rule cᵃ⁺ᵇ = cᵃ·cᵇ.
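The currying half of this correspondence can be witnessed directly; a small OCaml sketch:

```ocaml
(* The two directions of the isomorphism
     A -> (B -> C)  ≅  (A * B) -> C *)
let curry f a b = f (a, b)
let uncurry f (a, b) = f a b

let () =
  let add (a, b) = a + b in
  (* Round-tripping through both directions preserves behavior. *)
  assert (curry add 1 2 = 3);
  assert (uncurry (curry add) (1, 2) = 3)
```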
A sum type has as many possible values as the sum of its cases. E.g. `A of bool | B of bool` has 2+2=4 values. Similarly for product types and exponential types. E.g. the type bool -> bool has 2^2=4 values (id, not, const true, const false) if you don't think about side effects.
Not the best example since 2*2=4 also.
How about this bit of Haskell:
That's 3 ^ 2 = 9, right? Those are 6. What would be the other 3? Or should it actually be a*b = 6?
EDIT: Never mind, I counted wrong. Here are the 9:
EDIT: now you see why I used the smallest type possible to make my point. Exponentials get big FAST (duh).
f1 False = Nothing, f1 True = Nothing
f2 False = Nothing, f2 True = Just True
...
This gives the correct 3^2 = 9 functions.
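The count can also be checked mechanically: a function from Bool to Maybe Bool is determined by the pair of its two outputs, so enumerating those pairs gives the exponential. A small OCaml sketch (using bool option for Maybe Bool):

```ocaml
(* A function bool -> bool option is determined by
   (f true, f false); counting the pairs gives 3 * 3 = 3^2 = 9. *)
let codomain = [None; Some true; Some false]

let all_functions =
  List.concat_map
    (fun on_true ->
      List.map (fun on_false -> (on_true, on_false)) codomain)
    codomain

let () = assert (List.length all_functions = 9)
```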
Here is my uneducated guess:
In math, after sum and product, comes exponent :)
So they may have used that third term in an analogous manner in the example.
This has the benefit of giving you the ability to refer to a case as its own type.
> the expression of sums verbose and, in my view, harder to reason about.
You declare the sum type once, and use it many times. Slightly more verbose sum type declaration is worth it when it makes using the cases cleaner.
And since OCaml also has an object model, you can also encode sums and sealing using modules (and private type abbreviations).
A meta point: it seems to me that a lot of commenters in my thread don't know that vanilla HM cannot express subtypes. This is what allows the type system to "run backwards", giving you full type inference without any type annotations. One can call it a good tradeoff, but it IS a tradeoff.
I think we agree on a lot of points. The rest is mostly preferences. Some other comments in my thread though...
A case of a sum-type is an expression (of the variety so-called a type constructor), of course it has a type.
A case itself isn't a type, though it has a type. Thanks to pattern matching, you're already unwrapping the parameter to the type constructor when handling the case of a sum-type. It's all about declaration locality: (real * real) doesn't depend on the existence of shape.

The moment you start ripping cases out of the sum-type as distinct types, you create the ability to side-step exhaustiveness, and sum-types become useless for making invalid program states unrepresentable. They're also no longer sum-types. If you have a sum-type of nominally distinct types, the sum-type is contingent on the existence of those types. In a class hierarchy, this relationship is bizarrely reversed, and there are knock-on effects to that.
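To make the exhaustiveness point concrete, here is a minimal OCaml sketch (the `shape` type is my own illustration, not from the thread):

```ocaml
(* A disposable sum type, declared in place. *)
type shape =
  | Point of float * float
  | Circle of float * float * float        (* cx, cy, radius *)
  | Rect of float * float * float * float  (* x, y, width, height *)

(* The compiler checks exhaustiveness: deleting any branch below
   triggers warning 8 (non-exhaustive pattern match). *)
let area = function
  | Point _ -> 0.
  | Circle (_, _, r) -> Float.pi *. r *. r
  | Rect (_, _, w, h) -> w *. h
```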
> You declare the sum type once, and use it many times.
And you typically write many sum-types. They're disposable. And more to the point, you also have to read the code you write. The cost of verbosity here is underestimated.
> Slightly more verbose sum type declaration is worth it when it makes using the cases cleaner.
C#/Java don't actually have sum-types. It's an incompatible formalism with their type systems.
Anyways, let's look at these examples:
C#:
ML:

They're pretty much the same outside of C#'s OOP quirkiness getting in its own way.

Quite the opposite: that gives me the ability to explicitly express what kinds of values I might return. With your shape example, you cannot express in the type system "this function won't return a point". But with a sum type as a sealed inheritance hierarchy I can.
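For what it's worth, OCaml can express a "won't return a point" guarantee via polymorphic variants; a minimal sketch (my own example, not the parent's):

```ocaml
(* Polymorphic variants let a function's type promise a subset of cases. *)
type shape = [ `Point | `Circle of float ]

(* The return type guarantees this function never yields `Point. *)
let normalize : shape -> [ `Circle of float ] = function
  | `Point -> `Circle 0.       (* promote points to radius-0 circles *)
  | `Circle _ as c -> c
```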
> C#/Java don't actually have sum-types.
> They're pretty much the same
Not sure about C#, but in Java if you write `sealed` correctly you won't need the catch-all throw.
If they're not actual sum types but are pretty much the same, what good does the "actually" do?
Will the compiler check that you have handled all the cases still? (Genuinely unsure — not a Java programmer)
https://openjdk.org/jeps/409#Sealed-classes-and-pattern-matc...
> with pattern matching for switch (JEP 406), the compiler can confirm that every permitted subclass of Shape is covered, so no default clause or other total pattern is needed. The compiler will, moreover, issue an error message if any of the three cases is missing
Sure you can, that's just subtyping. If it returns a value that's not a point, the domain has changed from the shape type and you should probably indicate that.
This is doing things quick and dirty. For this trivial example it's fine, and I think it's a good example of why making sum-types low friction is a good idea. It completely changes how you solve problems when they're fire-and-forget like this.

That's not to say it's the only way to solve this problem, though. And for heavy-duty problems, you typically write something like this using higher-kinded polymorphism:
This is extremely overkill for the example, but it also demonstrates a power you're not getting out of C# or Java without usage of reflection. This is closer to the system of inheritance, but it's a bit better designed. The added benefit here over reflection is that the same principle of "invalid program states are unrepresentable" applies here as well, because it's the exact same system being used. You'll also note that even though it's a fair bit closer conceptually to classes, the sum-type is still distinct.

Anyways, in both cases, this is now just:
Haskell has actual GADTs and proper higher-kinded polymorphism, and a few other features where this all looks very different and much terser. Newer languages bake subtyping into the grammar.

> If they're not actual sum types but are pretty much the same, what good does the "actually" do?
Conflation of two different things here. The examples given are syntactically similar, and they're both treating the constituent part of the grammar as a tagged union. The point was that the case isn't any cleaner.
However in the broader comparison between class hierarchies and sum-types? They're not similar at all. Classes can do some of the things that sum-types can do, but they're fundamentally different and encourage a completely different approach to problem-solving, conceptualization and project structure... in all but the most rudimentary examples. As I said, my 2nd example here is far closer to a class-hierarchy system than sum-types, though it's still very different. And again, underlining that because of the properties of sum-types, thanks to their specific formalization, they're capable of things class hierarchies aren't. Namely, enforcing valid program-states at a type-level. Somebody more familiar with object-oriented formalizations may be a better person to ask than me on why that is the case.
It's a pretty complicated space to talk about, because these type systems deviate on a very basic and fundamental level. Shit just doesn't translate well, and it's easy to find false friends. Like how the Japanese word for "name" sounds like the English word, despite not being a loan word.
Anyway, to translate your example:
A `Rectangle` is both a `Bound` (weird name choice but whatever) and a `Shape`. Thanks to subtyping, no contortion needed. No need to use 7 more lines to create a separate, unrelated type.

> the Japanese word for "name" sounds like the English word, despite not being a loan word.
Great analogy, except for the fact that someone from the Java team explicitly said they're drawing inspirations from ML.
https://news.ycombinator.com/item?id=24203363
Substantiate this.
> weird name choice but whatever
I don't think this kind of snarky potshot is in line with the commentary guidelines. Perhaps you could benefit from a refresher?
https://news.ycombinator.com/newsguidelines.html#comments
> Thanks to subtyping, no contortion needed
I see the same degree of contortion, actually. Far more noisy, at that.
> No need to use 7 more lines to create a separate, unrelated type.
You're still creating a type, because you understand that a sum-type with a different set of cases is fundamentally a different type. Just like a class with a different set of inheritance is a different type. And while it's very cute to compress it all into a single line, it's really not compelling in the context of readability and "write once, use many". Which is the point you were making, although it was on an entirely different part of the grammar.
> Great analogy, except for the fact that someone from the Java team explicitly said they're drawing inspirations from ML.
ML didn't invent ADTs, and I think you know it's more than disingenuous to imply the quotation means that the type-system in Java which hasn't undergone any fundamental changes in the history of the language (nor could it without drastically changing the grammar of the language and breaking the close relationship to the JVM) was lifted from ML.
You never gave an example how sum types in Java/Kotlin cannot do what "real" sum types can.
>> weird name choice but whatever
> snarky potshot
Sorry that you read snark. What I meant was "I find naming this 'Bound' weird. But since I am translating your example, I'll reuse it".
> You're still creating an unrelated type
How can a type participating in the inheritance hierarchy be "unrelated"?
> I see the same degree of contortion, actually. Far more noisy, at that.
At this point I can only hope you're a Haskeller and do not represent an average OCaml programmer.
Operationally these systems and philosophies are quite different, but mathematically we are all working in more or less an equivalent category, and all the type system shenanigans you have in FP are possible in OOP, modulo explicit limits placed on the language, and vice versa.
Me neither.
> you are entirely correct that sealed types can fully model sum types
I want to be wrong, in that case I learn something new.
Correct. This is not the case when you talk about Java/Kotlin. Just ugliness and typical boilerplate heavy approach of JVM languages.
I have provided a case how using inheritance to express sum types can help in the use site. You attacked without substantiating your claim.
> This has the benefit of giving you the ability to refer to a case as its own type.

I have no idea what this means.
I can tell.
Thankfully the OCaml textbook has this explicitly called out.
https://dev.realworldocaml.org/variants.html#combining-recor...
> The main downside is the obvious one, which is that an inline record can’t be treated as its own free-standing object. And, as you can see below, OCaml will reject code that tries to do so.
So you have to either:

- create a separate record type, which is no less verbose than Java's approach, or
- use positional destructuring, which is bug-prone for business logic.
Also it's funny that you think OCaml records have "better syntax". It's a weak part of the language, creating ambiguity. People work around this quirk by wrapping every record type in its own module.
https://dev.realworldocaml.org/records.html#reusing-field-na...
ReasonML has custom operators that allow for manipulating monads somewhat sanely (`>>=` operators and whatnot); Rescript (ReasonML's "fork") did not, last time I checked. But Rescript does have an async/await syntax, which helps a lot with async code; ReasonML did not, last time I checked, so you had to use raw promises.
I believe Melange (which the article briefly talks about) supports let bindings with the reason syntax.
And this kinda changes everything if you use React, because you can now have sane JSX with let bindings, which you could not until Melange. Indeed, you can PPX your way out of it in OCaml syntax, but I'm not sure the syntax highlighting works well in code editors. It did not in mine, anyway, last time I checked.
So for frontend coding, Melange’s reason ml is great as you have both, and let bindings can approximate quite well async syntax on top of writing readable monadic code.
For backend code, as a pythonista, I hate curlies. and I do like parenthesis-less function calls and definitions a lot. But I still have a lot of trouble, as a beginner ocamler, with non-variable function argument as I need to do “weird” parenthesis stuff.
Hope this “helps”!
Wanting to use a functional language I pivoted to fsharp, which was not the expected choice for me as I use Linux exclusively. I have been happy with this choice, it has even become my preferred language. The biggest problem for me was the management of the fsharp community, the second class citizen position of fsharp in the DotNet ecosystem, and Microsoft's action screwing the goodwill of the dev community (eg hot reload episode). I feel this hampered the growth of the fsharp community.
I'm now starting to use rust, and the contrast on these points couldn't be bigger.
Edit: downvoters, caring to share why? I thought sharing my experience would have been appreciated. Would like to know why I was wrong.
Use opam: https://opam.ocaml.org or https://opam.ocaml.org/doc/Install.html.
Additionally, see: https://ocaml.org/install#linux_mac_bsd and https://ocaml.org/docs/set-up-editor.
It is easy to set up with Emacs, for example. VSCodium has OCaml extension as well.
All you need for the OCaml compiler is opam, it handles all the packages and the compiler.
For your project, use dune: https://dune.readthedocs.io/en/stable/quick-start.html.
Additionally, see: https://dune.readthedocs.io/en/stable/tutorials/dune-package....
Anyways, what you could do is:
Alternatively, you can use the tarball directly.

Can you be more specific?
(Why the down-vote? He does need to be specific, right? Additionally, my experiences are somehow invalid?)
Real life sample:
A function is defined as:

That seems pretty hard to read at a glance, and easy to mistype as a definition. Also, you need to end the declaration with `in`?
Then, semicolons...
... and even double semicolons ... That looks like a language you really want an IDE helping you with.

I actually really like the syntax of OCaml; it's very easy to write and, when you're used to it, easy to read (easier than ReasonML IMO).
Double semicolons are afaik only used in the repl.
YMMV but let expressions are one of the nice things about OCaml - the syntax is very clean in a way other languages aren't. Yes, the OCaml syntax has some warts, but let bindings aren't one of them.
It's also quite elegant if you consider how multi-argument let can be decomposed into repeated function application, and how that naturally leads to features like currying.
> Also, you need to end the declaration with `in`?
Not if it's a top level declaration.
It might make more sense if you think of the `in` as a scope operator, eg `let x = v in expr` makes `x` available in `expr`.
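A tiny sketch of that reading of `let ... in`:

```ocaml
(* `let x = v in expr` makes x visible only inside expr... *)
let result =
  let x = 2 in
  let y = 3 in
  x * y                            (* both bindings in scope here *)
(* x and y are not visible past this point *)

(* ...and is equivalent to immediately applying a function: *)
let result' = (fun x -> x * 2) 5   (* same as: let x = 5 in x * 2 *)
```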
> Then, semicolons...
Single semicolons are syntactic sugar for unit return values. Eg,
is the same as

The syntax is `let <constantname> <parameters> = <value> in <expression>;;`, where `expression` can also have `let` bindings.
So you can have
In essence, an OCaml program is a giant recursive expression, because `expr` can have its own set of let definitions.
In the REPL, this is where the double semicolons come in, as a sort of hint to continue processing after the expression.
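A small sketch of both points, the `;` sugar and the nested-`let` structure (`;;` shown only in a comment, since it belongs to the REPL):

```ocaml
(* `e1; e2` runs e1 for its unit effect, then evaluates e2;
   it is sugar for: let () = e1 in e2 *)
let greet () =
  print_string "hello";
  print_newline ()

(* The program as one big expression: each `let ... in` nests in the next. *)
let answer =
  let double x = x * 2 in
  let triple x = x * 3 in
  double (triple 7)

(* In the REPL:
     # 1 + 2;;
     - : int = 3
   The `;;` tells the toplevel to evaluate the preceding phrase. *)
```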
The type system usually means that I might take longer to get my code to compile, but that I won’t spend much (if any) time debugging it once I’m done.
I’m in the middle of pulling together bits of a third party library and refactoring them over several days work, and I’m pretty confident that most of the issues I’ll face when done will be relatively obvious runtime ones.
Even the object system which most OCaml developers avoid, is actually very useful for some specific modelling scenarios (usually hierarchies in GUIs or IaC) that comes with similar type system guarantees and minimal type annotations.
Also, I had no idea that the module system had its own type system, that’s wild.
> The idea that the example program could use pattern matching to bind to either test values or production ones is interesting, but I can’t conceptualize what that would look like with the verbal description alone.
The article appears to have described the free monad + interpreter pattern, that is, each business-logic statement doesn't execute the action (as a verb), but instead constructs it as a noun and slots it into some kind of AST. Once you have an AST you can execute it with either a ProdAstVisitor or a TestAstVisitor which will carry out the commands for real.
More specific to your question, it sounds like the pattern matching you mentioned is choosing between Test.ReadFile and Test.WriteFile at each node of the AST (not between Test.ReadFile and Prod.ReadFile.)
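A rough OCaml sketch of the "commands as data, interpreted later" idea (my own illustration; the article's effect-handler code and the Visitor names above will differ):

```ocaml
(* Business logic builds a description of actions instead of performing them. *)
type cmd =
  | Read_file of string
  | Write_file of string * string

(* A production interpreter would touch the real filesystem; this test
   interpreter just records what would have happened. *)
let run_test (program : cmd list) : string list =
  List.map
    (function
      | Read_file path -> "read " ^ path
      | Write_file (path, _) -> "write " ^ path)
    program

let log = run_test [ Read_file "a.txt"; Write_file ("b.txt", "data") ]
(* log = ["read a.txt"; "write b.txt"] *)
```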
I think the Haskell community turned away a little from free monad + interpreter when it was pointed out that the 'tagless final' approach does the same thing with less ceremony, by just using typeclasses.
> I’d have liked to see the use of dependency injection via the effects system expanded upon.
I'm currently doing DI via effects, and I found a technique I'm super happy with:
At the lowest level, I have a bunch of classes & functions which I call capabilities, e.g.
These are tightly focused on doing one thing, and must not know anything about the business. No code here would need to change if I changed jobs.

At the next level up I have classes & functions which can know about the business (and the lower-level capabilities).
At the top of my stack I have a type called CliApp, which represents all the business logic things I can do, e.g.

I associate CliApp to all its actual implementations (low-level and mid-level) using type classes:
In this way, CliApp doesn't have any of 'its own' implementations; it's just a set of bindings to the actual implementations.

I can create a CliTestApp which has a different set of bindings, e.g.
Now here's where it gets interesting. Each function (all the way from top to bottom) has its effects explicitly in the type system. If you're unfamiliar with Haskell, a function either having IO or not (in its type sig) is a big deal. Non-IO essentially rules out non-determinism.

The low-level prod code (capabilities) is allowed to do IO, as signaled by the MonadIO in the type sig:
but the equivalent test double is not allowed to do IO, per:

And where it gets crazy for me is: the high-level business logic (e.g. fetchStoredCommands) will be allowed to do IO if run via CliApp, but will not be allowed to do IO if run via CliTestApp, which for me is 'having my cake and eating it too'.

Another way of looking at it is: if I invent a new capability (e.g. Caching) and start calling it from my business logic, the CliTestApp pointing at that same business logic will raise a compile-time error that it doesn't have its own Caching implementation. If I try to 'cheat' by wiring the CliTestApp to the prod Caching (which would make my test cases non-deterministic), I'll get another compile-time error.
Would it work in OCaml? Not sure, the article says:
> Currently, it should be noted that effect propagation is not tracked by the type system
Do you use Haskell professionally? If so, is this sort of DI style common?
F# has FuncUI - based on Avalonia. All just possible because of the ecosystem.
https://github.com/fsprojects/Avalonia.FuncUI
OCaml does have an okay LSP implementation though, and it's getting better; certainly more stable than F#'s in my experience, since that comparison is coming up a lot in this comment section.
OCaml has been shipping with an actual fully functional reverse debugger for ages.
Is the issue mostly integration with the debugging ui of VS Code?
and yeah integrating to VS Code debugging UI would be ideal
I really like OCaml, so I hope the community can continue to improve the UX of these features
Yes, I see what you mean now. You encountered a bug around system thread and dune didn’t properly pass your artifacts. That’s indeed annoying.
I deeply dislike dune myself and never use it. I just use the OCaml toolchain like I would a good old C one, which might explain our different experiences.
I saw this in the OP:
>For example, creating a binding with the Tk library
and had also been thinking about this separately a few days ago, hence the question.
Nevertheless, I have fond memories of OCaml and a great amount of respect for the language design. Haven't checked on it since, probably should. I hope part of the problems have been solved.
The Dune build system does default to ocamlopt nowadays, although maybe not back around 2020.
Do you have a ballpark value of how much faster Rust is? Also I wonder if OxCaml will be roughly as fast with less effort.
Recovered typeaholic here. I still occasionally use OCaml, and I primarily wrote F# and Haskell for years. I've been quite deep down the typing rabbit hole, and I used to scorn dynamically typed languages.
Now I love dynamic typing - but not the Python kind - I prefer the Scheme kind - latent typing. More specifically, the Kernel[1] kind, which is incredibly powerful.
> I think the negative reputation of static type checking usually stems from a bad experience.
I think this goes two ways. Most people's experience with dynamic typing is the Python kind, and not the Kernel kind.
To be clear, I am not against static typing, and I love OCaml - but there are clear cases where static typing is the wrong tool - or rather, no static typing system is sufficient to express problems that are trivial to write correctly with the right dynamic types.
Moreover, some problems are inherently dynamic. Take for example object-capabilities (aka, security done right). Capabilities can be revoked at any time. It makes no sense to try and encode capabilities into a static type system - but I had such silly thoughts when I was a typeaholic, and I regularly see people making the same mistake. Wouldn't it be better to have a type system which can express things which are dynamic by nature?
And this is my issue with purely statically typed systems: They erase the types! I don't want to erase the types - I want the types to be available at runtime so that I can do things with them that I couldn't do at compile time - without me having to write a whole new interpreter.
My preference is for Gradual Typing[2], which lets us use both worlds. Gradual typing is static typing with a `dynamic` type in the type system, and sensible rules for converting between dynamic and static types - no transitivity in consistency.
People often mistake gradual typing with "optional typing" - the kind that Erlang, Python and Typescript have - but that's not correct. Those are dynamic first, with some static support. Gradual typing is static-first, with dynamic support.
Haskell could be seen as Gradual due to the presence of `Data.Dynamic`, but Haskell's type system, while a good static type system, doesn't make a very good dynamic type system.
Aside, my primary language now is C, which was the first language I learned ~25 years ago. I regressed! I came back to C because I was implementing a gradually typed language and F#/OCaml/Haskell were simply too slow to make it practical, C++/Rust were too opinionated and incompatible with what I want to achieve, and C (GNU dialect) let me have almost complete control over the CPU, which I need to make my own language good enough for practical use. After writing C for a while I learned to love it again. Manually micro-optimizing with inline assembly and SIMD is fun!
[1]:https://web.cs.wpi.edu/~jshutt/kernel.html
[2]:https://jsiek.github.io/home/WhatIsGradualTyping.html
Could you elaborate on the difference? I was under the impression that "latent typing" just means "values, not variables, have types", which would make Python (without type annotations) latently typed as well.
- No HKTs "in your sense", but:

```ocaml
module type S = sig
  type 'a t
end
```

`type 'a t` is a higher-kinded type (but at the module level).

- No typeclasses, yes, for the moment, but the first step of https://arxiv.org/pdf/1512.01895 is under review: https://github.com/ocaml/ocaml/pull/13275

- No call-site expansion? See https://ocaml.org/manual/5.0/attributes.html and look at the `inline` attribute.
Strong stance on Modules. My ignorance, what do they do that provides that much benefit. ??
Modules are like structurally-typed records that can contain both abstract types and values/functions dependent on those types; every implementation file is itself a module. When passed to functors (module-level functions), they allow you to parameterize large pieces of code, depending on multiple types and functions, all at once quite cleanly. And simply including them or narrowing their signatures is how one exports library APIs.
(The closest equivalent I can imagine to module signatures is Scala traits with abstract type members, but structurally-typed and every package is an instance.)
However, they are a bit too verbose for finer-grained generics. For example, a map with string keys needs `module String_map = Map.Make(String)`. There is limited support for passing modules as first-class values with less ceremony, hopefully with more on the way.
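The signature + functor mechanics described above can be sketched as follows (`Ordered`/`MakeInterval` are illustrative names; the stdlib's `Map.Make` follows the same shape):

```ocaml
(* A module signature with an abstract type and operations on it... *)
module type Ordered = sig
  type t
  val compare : t -> t -> int
end

(* ...and a functor: a module-level function parameterized over it. *)
module MakeInterval (O : Ordered) = struct
  type t = { lo : O.t; hi : O.t }
  let contains { lo; hi } x =
    O.compare lo x <= 0 && O.compare x hi <= 0
end

(* Instantiate it once for ints, the same way Map.Make(String) works. *)
module Int_interval = MakeInterval (struct
  type t = int
  let compare = Int.compare
end)

let inside = Int_interval.contains { Int_interval.lo = 1; hi = 10 } 5
```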
Yes, F# is a very nice language. I attempt a (somewhat forced, it seems to me) comparison between OCaml and F# in the following section: https://xvw.lol/en/articles/why-ocaml.html#ocaml-and-f