Interesting article - but perhaps a bit light on details in some places, like:
> I generated a list of the most common interview tasks
How? I suppose they mean "gathered" or "searched for", not strictly "generated"?
Also a little light on details of the actual interview.
I'm also a little confused about the listing of "problems" - do they refer to some specific leet-code site's listing of problems?
It seems like halfway between naming an actual algorithm/problem and naming a concrete exercise.
As for:
> How is it that we do not use this "forgotten and forbidden" coding in our daily production code, even though all highly reusable, useful code is essentially an exploitation of the intersection between classical algorithmic thinking and real-world problems?
I'm not sure what to say - most of this stuff lives in library code and data structure implementations for any language in common use?
Indeed, the one saving grace of leetcode interviews is arguably that they show whether the candidate can choose sane data structures (and algorithms) when implementing real-world code?
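To make the data-structure point concrete, here is a toy sketch (the names and numbers are made up for illustration, not from the article): the difference between a sane and a naive choice often comes down to something as small as how membership testing is done.

```python
# Membership testing against a list scans every element (O(n) per
# lookup), while a set hashes the key (O(1) on average).
# Both give the same answers; only the complexity differs.
allowed_ids = list(range(100_000))      # naive choice: list
allowed_ids_set = set(allowed_ids)      # sane choice: set

def is_allowed_list(user_id: int) -> bool:
    return user_id in allowed_ids       # linear scan

def is_allowed_set(user_id: int) -> bool:
    return user_id in allowed_ids_set   # hash lookup

assert is_allowed_list(99_999) and is_allowed_set(99_999)
assert not is_allowed_list(-1) and not is_allowed_set(-1)
```

The kind of real-world code the comment alludes to rarely needs a hand-rolled algorithm, but it constantly needs this sort of judgment call.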
Very cool, I have personally been studying zk-cryptography with a similar approach, works really well with some caveats. Will save this article and try this version as well when the time comes!
This is very interesting. I've been using LLMs to learn new things that way and it really works. To some extent, learning with an LLM is better than taking any course, even with a tutor, because I get something prepared for me in terms of my experience, progress level, etc.
LLMs are going to change schools and universities a lot. Teachers and tutors will have to find their place in a new reality where they have a strong competitor with infinite resources and huge knowledge, patient and ready to work with every student individually, according to that student's needs, level, intelligence, etc.
Instruction-based tutoring is dead from that perspective: why should I follow someone reciting a book or online tutorial when there is a tool that can introduce me to a subject in a better and more interesting way?
Sure, there are great teachers, inspiring people who can present a topic in a compelling way; the point is, they are a minority. Now everyone can have a great tutor for a few dollars a month (or for free, if you don't need to generate too much data quickly).
> LLMs are going to change schools and universities a lot. Teachers and tutors will have to find their place in a new reality where they have a strong competitor with infinite resources and huge knowledge, patient and ready to work with every student individually, according to that student's needs, level, intelligence, etc.
No it won't. It really, really won't. You clearly don't have any university professors among your friends or acquaintances.
What you wrote is what the STUDENTS think. The students think they have found a cheat code.
No university professor considers an LLM "a competitor". They see the slop output on their desks every day.
The reality is that just as LLMs confidently push out slop code, they push out slop for everything else too, because LLMs are nothing more than a party trick: a stats-based algorithm that gives you answers within a Gaussian curve.
The students come to the professors with stupid questions because they've been trusting the AI instead of learning properly. Some students even have the audacity to challenge the professor's marking, saying "but the AI said it is right" about some basic math formula the student should be able to solve with their own brain.
So what do my university professor friends end up doing?
They spend their evenings and weekends thinking up lab tasks that the students cannot achieve by simply asking the LLM for the answer. The whole point of university is you go there to learn to reason and think with your own damn brain, not paste the question into a text box and paste the answer to your professor.
Trying to cheat your way through university with an LLM is a waste of the student's time, a waste of the professor's time, and a waste of the university's infrastructure.
That, my friend, is the reality.
I’m an unusually good programmer, I’ve worked in over 25 different programming languages and have been doing it since I was 6. I’ve spent most of my career as an applied researcher in research orgs where my full time job is study.
Finding new relevant things to learn gets progressively more difficult, and LLMs have blown that right open. Even if they have zero new ideas, their encoding and searching of existing ideas is nothing like I've seen before. If they can teach me things, they can definitely teach less experienced people things as well. Sometimes it takes a bit of prodding: a model will insist something is impossible, but when presented with evidence to the contrary it will turn around and give working prototypes. Which means in these very long-tail instances it does still help to have some prerequisite knowledge. I wish they were more able to express uncertainty.
I think the primary reason Ed Tech hasn't been disrupted is that an expensive education is a costly signal and a class demarcator; making it cheaper defeats the primary purpose. Grade creep, the reproducibility crisis, the plagiarism crisis, and cheating scandals fail to undermine this purpose. In fact, the worse it gets, the more it becomes a costly signal. As inequality increases, so does the importance of social signals.
In many countries, universities are given special privileges to act as a gateway to permanent residency, which is extremely profitable. For anything to replace education, it would have to either supplant this role as a social signal, or the reward for the social signal would need to be lost, and I don't see either happening anytime soon short of a major calamity.
LLMs aren't any of these things: infinite, knowledgeable, patient, or ready. They are a compressed representation of all of the misstatements and misunderstandings in the history of Reddit. If you think you've been using LLMs to "learn new things", it could be because you aren't already familiar with the domain and you can't see where it's misleading you.
I mean, you've collapsed a complex, mixed system into a single negative narrative.
Examples of how I learn with LLMs:
- Paste sections from reading and ask questions / clarify my understanding / ask it to quiz me
- Produce Anki cards by pasting in chapter text and then culling out the good ones
- Request resources / links for further learning
Basically, LLMs serve as a thinking partner. Yes, it's a fallible tool, not an oracle. But dismissing the idea that you can learn (and learn faster / more efficiently) with LLMs is reductionist.
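For what it's worth, the Anki step above can be partly automated. A minimal sketch, assuming you prompt the model to emit `Q:`/`A:` pairs (that convention is an assumed prompt contract, not anything Anki requires, and no actual LLM call is shown): convert the pasted output into Anki's tab-separated import format.

```python
import csv
import io

def to_anki_tsv(llm_output: str) -> str:
    """Convert 'Q: ...' / 'A: ...' pairs (an assumed format the model
    was prompted to use) into Anki's tab-separated import format:
    one card per line, question and answer separated by a tab."""
    cards, question = [], None
    for line in llm_output.splitlines():
        line = line.strip()
        if line.startswith("Q:"):
            question = line[2:].strip()
        elif line.startswith("A:") and question is not None:
            cards.append((question, line[2:].strip()))
            question = None  # reset until the next question
    buf = io.StringIO()
    csv.writer(buf, delimiter="\t", lineterminator="\n").writerows(cards)
    return buf.getvalue()

sample = (
    "Q: What does TCP stand for?\n"
    "A: Transmission Control Protocol\n"
    "Q: Default HTTPS port?\n"
    "A: 443"
)
print(to_anki_tsv(sample))
```

The culling step stays manual: you still read each card before importing, which is where the actual learning check happens.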
To some extent. I had Claude (Sonnet 4.5) generate some homework problems for students I was teaching to code, and the problems/answers weren't actually right. They were subtly wrong, which makes me worry about using it for other subjects.
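One mitigation for subtly wrong generated exercises is to sanity-check the model's claimed answers mechanically before handing them out. A toy sketch (the exercises, the claimed answers, and the prompt parsing are all invented for illustration):

```python
# Each generated exercise carries the model's claimed answer plus an
# independent brute-force check; any disagreement flags it for review.
def brute_force_sum_of_odds(n: int) -> int:
    """Sum of the first n odd numbers, computed the slow, obvious way."""
    return sum(k for k in range(1, 2 * n, 2))

exercises = [
    {"prompt": "Sum of the first 10 odd numbers?", "claimed": 100},
    {"prompt": "Sum of the first 7 odd numbers?",  "claimed": 50},  # subtly wrong: 49
]

for ex in exercises:
    # Pull n out of the (illustrative) prompt text.
    n = int(ex["prompt"].split("first ")[1].split(" ")[0])
    ok = brute_force_sum_of_odds(n) == ex["claimed"]
    print(("OK  " if ok else "BAD ") + ex["prompt"])
```

This only works for problems with a checkable ground truth, which is exactly why the commenter's worry about other subjects is fair.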
I think that Knoll’s law of media accuracy applies quite well to LLMs as well:
> “everything you read in the newspapers is absolutely true, except for the rare story of which you happen to have firsthand knowledge”.
Sounds interesting, can you share some useful prompts for learning?
(Not OP, but...) I personally am not very into "prompting"; you just need to figure out how these models work.
It works best when you ask about a well-known problem/thing they can reference (vs. a niche way to solve exactly what you want to solve).
Then you work backwards, i.e. why is it like this, what is this for, what are the alternative ways to accomplish this, etc.
It's a big query engine, after all.
Don't ask "what is the exact right way" or the like, because it will try to generate that and will likely hallucinate if there is no such answer in its training corpus.
Instead, ask what the model does know, or doesn't.