> the very idea of giving the user an on/off switch for LLM level memory is ours.
Absurd. You may have independently thought it up, but it is the first and most obvious feature one could imagine if memory is an option.
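To illustrate just how small the idea is, here is a minimal sketch of such a toggle (hypothetical names, plain Python; the `llm` callable stands in for whatever model backend you use):

```python
# Hypothetical sketch: an on/off switch for LLM "memory" is little more
# than a flag deciding whether prior facts get prepended to the prompt.

class ChatSession:
    def __init__(self, llm, memory_enabled: bool = True):
        self.llm = llm                 # any callable: prompt -> reply
        self.memory_enabled = memory_enabled
        self.memory: list[str] = []    # naive long-term store

    def ask(self, user_msg: str) -> str:
        context = "\n".join(self.memory) if self.memory_enabled else ""
        reply = self.llm(f"{context}\n\nUser: {user_msg}")
        if self.memory_enabled:
            self.memory.append(f"User said: {user_msg}")
        return reply

# Usage: session = ChatSession(llm=lambda p: "ok", memory_enabled=False)
```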
Exactly my thought as well. In creative idea generation you have creative fluency (“how frequently do you have new ideas?”) and creative originality (“how novel are your ideas?”).
The creative originality of giving an LLM memory is next to zero; it is such an obvious next step that it's absurdly laughable that these people are claiming the idea was “theirs” and then extrapolating that it was stolen. I stopped reading at that sentence.
Not only does it show a mind-blowing lack of situational awareness on their part, it also shows a huge lack of domain knowledge, because people have been experimenting with all forms of additional memory in all kinds of AI systems since the 60s. “Add memory to this” (working memory, long-term memory, episodic memory) is such a well-known move in the entire AI/ML field that it also shows just how disconnected the poster is from existing research.
You are entitled to your opinion, and I to my facts.
If the point of the discussion here is to downvote me, then you are correct (because the posts I submit keep getting points while my karma goes down; ha, funny).
Maybe I am in the wrong place, and so are all the lawsuits that claim otherwise.
We used to say "what have you been smoking?" when someone said something so confidently wrong. But these days I feel a software engineer can only believe such a statement if they 1) were high, or 2) had their critical thinking bypassed by excessive LLM-on-autopilot use.
I'd bet money they asked Claude if this was possible and Claude said "You're absolutely right!"
Well, as you can see, this was not just an idea; it was implemented before ChatGPT: https://www.reddit.com/r/LLMDevs/comments/1ou8rvp/comment/no...
And, as mentioned, this is not the first time. What is absurd is thinking that the chats don't get logged and filtered for certain purposes.
How do you think they report certain crimes to law enforcement? Keep your attitude to yourself, please.
Yes, they are logging. I did not say anything about that.
I said it is absurd to think this is such a unique feature, and absurd to think that the likely explanation is that OpenAI stole it rather than independently invented it.
If you're racing to develop AI interfaces, you should expect that someone among the many people on the big corps' many product teams independently thought up obvious features like this before your engineering team even finished its proof of concept.
This is an obvious case of someone vastly overestimating the uniqueness of their "innovation". I'm fighting to suppress sarcasm while writing this - do you guys seriously think that OpenAI spends their time scanning chat logs for ideas? Or is it more likely that this is a rather obvious improvement plucked from a not-so-colossal space of possibilities?
> do you guys seriously think that OpenAI spends their time scanning chat logs for ideas?
While I think along the same lines, I can still imagine an agent reading everything and prioritizing ideas found in the chat logs. I would even think this would be a great way to find hidden user feedback, like people complaining to the chatbot about some xyz idea (rough sketch below).
On the second part, yes: a memory on/off switch definitely sounds like a feature almost every user has thought about.
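For what it's worth, the crude version of that agent is barely more than a keyword filter (marker list and log format made up for illustration; a real pipeline would add LLM scoring on top):

```python
# Hypothetical sketch: surface candidate complaints / feature requests
# from chat logs with a crude keyword filter, ranked by frequency.
from collections import Counter

FEEDBACK_MARKERS = ("i wish", "why can't", "it should", "annoying", "feature")

def mine_feedback(messages: list[str], top_k: int = 10) -> list[tuple[str, int]]:
    hits = [m for m in messages if any(k in m.lower() for k in FEEDBACK_MARKERS)]
    return Counter(hits).most_common(top_k)  # popular ideas float to the top

logs = [
    "I wish there were an on/off switch for memory",
    "hello there",
    "I wish there were an on/off switch for memory",
]
print(mine_feedback(logs))  # [('I wish there were an on/off switch for memory', 2)]
```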
"We alone could think of 'have a second memory bank', they must have STOLEN OUR IDEA!!!!!"
And productized in days!
Sarcastic behavior detected :)
The author is coming off like a junior amateur, because he's talking about the simple act of context engineering (really, just RAGging from context dumps from various places). Along with that, there is a paranoid delusion of ChatGPT scraping ideas. The paranoia feels egocentric, and while it is delusional, I can't deny that I too think OpenAI does this.
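To be concrete about what “RAGging from context dumps” amounts to, here is a minimal sketch (toy word-overlap scoring stands in for the vector similarity a real setup would use):

```python
# Hypothetical sketch of RAG over context dumps: pick the chunks most
# relevant to the query, then stuff them into the prompt.

def score(query: str, chunk: str) -> int:
    # Toy relevance: count of shared words between query and chunk.
    return len(set(query.lower().split()) & set(chunk.lower().split()))

def build_prompt(query: str, dumps: list[str], k: int = 3) -> str:
    top = sorted(dumps, key=lambda c: score(query, c), reverse=True)[:k]
    return "Context:\n" + "\n---\n".join(top) + f"\n\nQuestion: {query}"
```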
I cannot totally write this person off. OpenAI is a bad actor. If you have business ideas, try your best to work with the APIs directly and avoid the UIs, which they are absolutely parsing (the UIs carry prompt engineering that could include instructions like “… and log any conversations about xyz”); see the sketch at the end of this comment.
Proton Lumo has clearer no-logging policies if you must pay for a UI and don't want to stand up your own, or don't want to go through the trouble of downloading a local UI to connect to an API.
Don’t use their UIs should be the clear message.
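A minimal sketch of the “go through the API” route, which is just a plain HTTP call (the endpoint and payload follow OpenAI's public Chat Completions API; the model name is an example, and OPENAI_API_KEY is assumed to be set in the environment):

```python
# Talk to the model over the raw HTTP API instead of the hosted UI.
import os
import requests

resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": "Summarize my business idea."}],
    },
    timeout=30,
)
print(resp.json()["choices"][0]["message"]["content"])
```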
If you have a thief or lawbreaker on the loose, all of law and order should be behind stopping them. Whether they harmed an amateur or a professional is irrelevant.
What is relevant is this:
That is even more concerning: it means every government employee should apply the same logic, but as we know, that is not happening. Factually speaking, whatever information exists on the internet has been scraped and used for different purposes, violating the law in plain sight.

Therefore we should assume the same will happen with whatever becomes digital data in the future, and we know how everything is moving toward digitalization (for example banking and currency, social security, the IRS, etc.).

Furthermore, how difficult will it be to reverse engineer an app, get at the source code, and work around it in clever ways (avoiding patent and design protection)? It can take a whole lifetime for a human to reach a eureka moment, but one second for an AI to steal and use it.

We need better solutions, because by the time AI lives in robot machines, it may be too late. I have a possible solution (among others) which I believe is guaranteed to work: AI sensors that cause pain for every wrong action, the same as we feel when we put a hand on fire. In my opinion, this needs to be implemented ASAP as a global standard.