> I'll bet Pickle's CEO - Daniel Park - all of their pre-order revenues if they miss their Q2 2026 delivery estimate for US customers. If he delivers in Q2, I'll pay him all the money they've accepted from pre-orders; if he doesn't, he pays me (should he accept).
This guy is serious.
I've run into similar companies in the past in a few different niches, and what they were doing was just repeating specs from Chinese OEM suppliers. They were not making their own hardware at all, just reselling it with custom branding and sometimes custom styling.
Could that be the case here? It would explain the accelerated development timeline: it's possible because it isn't their timeline at all, but someone else's, started long ago. They may well be repeating their supplier's two-year roadmap or something similar.
PS. One of the companies (or more specifically, its owners) that was doing this was eventually charged with fraud.
>Could that be the case here?
I think it's the case, but I also think it will not look or function anything like the mockup they showed.
> They were not making their own hardware at all, just reselling it with custom branding and sometimes custom styling.
Probably 99% of the electronics industry these days is like that. Laptops are one of the most commonly OEM'd products.
https://xcancel.com/thedowd/status/2007337800430198913 has some of the info. The main article doesn't work for me on x.com or xcancel (X account only? X app only?).
It seems to be the same with the Immersed Visor. Lots of promises, one barely functional demo in 2024, and they said mass production was delayed to "after summer 2025", but still nobody has one yet.
I was thinking of backing it, and I'm so glad I didn't. Immersed has a great app, so I don't think this was a blatant con, but I do think they bit off more than they could chew.
I spend a lot of time on power optimization for small systems. The promised power numbers, taken in conjunction with the promised features and brightness, are 100% a lie given current SoC/battery/cooling tech.
"Absolutely no latency" -> only Apple manages anything close to this, and only with custom silicon that can feed data from the camera to the screen while it is still being read out from the sensor. A no-name startup doing this ain't happening.
I'm surprised no one has tried to skip cameras altogether and use ultrasound. Mics use two or maybe even three orders of magnitude less power for an audio -> object inference stack vs. a visual one. Of course you can't detect colors or do A LOT of things, but hey, you could make glasses really look like regular glasses, especially if you got rid of the screens too.
Glasses as a computer form factor is not really proven out yet, but the cameras are one of the things people are actually using the Meta Ray-Bans for. One of the primary things people do with them is capture POV video. Take away the cameras and what are you left with? ChatGPT on command and headphones, and that's it? The Humane Pin would like a word. People buy smart glasses specifically for a rich feature set, the more the better (because it's a nerd/early-adopter product as of now).
Also, in the real world, people just do not care about cameras on glasses as much as HN commenters trotting out the glasshole articles from a decade ago would have you believe. Both smart glasses and phones that are actively recording are everywhere already.
Well, yeah: the Ray-Ban has a camera, and that's what people buy it for. It hardly does anything else (at least the version without a screen).
I'd explicitly want one without a camera to avoid the "glasshole effect".
And yes, people do care, at least here in Europe. The Meta glasses are banned at a lot of events now.
I've yet to see someone wear Meta Ray-Bans at work, so at least for me they're DOA. You're right, though. As for the use case: more of a better type of Siri. Presumably mics would make it much cheaper as well, on the order of a regular watch (~$100).
> I'm surprised no one has tried to skip cameras altogether and use ultrasound.
The ratios of achievable resolution and sensing range to physical sensor size are veeeeeeery bad with sound compared to cameras, though. Cameras are also completely passive sensors that don't require an attached emitter in most circumstances.
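To put rough numbers on that: angular resolution is diffraction-limited at roughly theta ~ 1.22 * wavelength / aperture, and ultrasonic wavelengths are enormous next to visible light. A back-of-envelope sketch in Python (the 40 kHz transducer frequency and 5 mm aperture are illustrative assumptions, not anyone's actual hardware):

    # Rayleigh criterion: theta ~ 1.22 * wavelength / aperture_diameter
    SPEED_OF_SOUND = 343.0                            # m/s in air
    ULTRASOUND_WAVELENGTH = SPEED_OF_SOUND / 40_000   # 40 kHz -> ~8.6 mm
    LIGHT_WAVELENGTH = 550e-9                         # green light, 550 nm
    APERTURE = 5e-3                                   # 5 mm, glasses-frame scale

    theta_sound = 1.22 * ULTRASOUND_WAVELENGTH / APERTURE  # ~2.1 rad: no shape detail at all
    theta_light = 1.22 * LIGHT_WAVELENGTH / APERTURE       # ~0.13 mrad: ~0.13 mm detail at 1 m
    print(f"ultrasound: {theta_sound:.2f} rad vs camera: {theta_light * 1e3:.2f} mrad")

That's roughly four orders of magnitude worse angular resolution for the same physical aperture, before even considering that the sonar has to emit.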
>you could make glasses really look like regular glasses
The cameras are not what makes the glasses bulky, and people find a lot of utility in taking and sharing pictures and videos from their glasses. So you'll probably always want at least one camera on the product for that use case.
I'm not talking about bulk; I'm talking about the fact that regular glasses don't have cameras on them, so glasses with cameras don't look like regular glasses.
They tried it back with the Power Glove, which used ultrasonic transmitters for position tracking.
Not sure why you think we have off-the-shelf miniaturized sonar hardware at scale, and shape-detection tech that could beat out mobile cameras and computer vision software.
Pretty sure Wayne Tech had the prototype of this sonar-vision translation layer of software all wrapped up way back in 2008, so it’s just a matter of productizing that, and since Pickle seems to deal in fiction already there’s good product synergy.
> Not sure why you think we have off-the-shelf miniaturized sonar hardware at scale, and shape-detection tech that could beat out mobile cameras and computer vision software.
Um, I didn't say that; what I'm asking is exactly the opposite, in fact. And the Power Glove thing was hardly well capitalized; I wouldn't consider that a serious attempt.
I have a degree in Computer Vision, and, whether Pickle is lying about various capabilities or not, this guy is talking completely out of his ass and a whole lot of what he says is just extremely idiotic.
> tracking blah blah 6DoF blah blah IMU
This whole section is just wildly false. Tracking like that shown in the video is easily done with just a camera, 1980s-era sparse optical flow, and basic fucking geometry. No IMU needed. People have been doing far more complex and stable motion tracking with no more input than single-camera video for literally decades. And this device doesn't just have a camera; it has two HD cameras in a stereo pair, so they also get a depth map. You can absolutely do what they show with the hardware that Pickle claims is in the glasses.
(If you want a fantastic example, see the intro sequence to the movie Stranger Than Fiction from 2006.)
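For a sense of how little machinery this takes, here's a minimal sketch of that kind of sparse tracking with OpenCV: Shi-Tomasi corners plus pyramidal Lucas-Kanade, both decades-old techniques. The video path and parameter values are generic placeholders, not anything from Pickle:

    import cv2

    # Sparse feature tracking: Shi-Tomasi corners + pyramidal Lucas-Kanade.
    # "video.mp4" is a placeholder path; the parameters are generic defaults.
    cap = cv2.VideoCapture("video.mp4")
    ok, frame = cap.read()
    prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=7)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Track each corner from the previous frame into the current one.
        new_pts, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
        good = new_pts[status.flatten() == 1]
        # These 2D correspondences alone are enough to estimate inter-frame
        # camera motion (e.g. cv2.findEssentialMat + cv2.recoverPose) and pin
        # an overlay to the scene -- no IMU and no SLAM map involved.
        prev_gray, pts = gray, good.reshape(-1, 1, 2)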
> It would take time to affix an open source SLAM pipeline and even more for them to build their own.
And this is a complete non sequitur, as SLAM is also not needed for what they show in the video. Nothing shown requires mapping the area. It's also a joke to say that it would "take time to affix an open source SLAM pipeline" unless by "time" he means a few minutes.
> This would indicate either the software is using real-time depth tracking blah blah
The glasses have fucking binocular cameras in them! What the fuck else would they be for?
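And getting depth out of a stereo pair is textbook stuff. A minimal sketch with OpenCV's classic block matcher, assuming a rectified pair; the image paths, focal length, and baseline are made-up placeholders:

    import cv2

    # Dense stereo depth from a rectified pair: disparity via block matching,
    # then depth = focal_length_px * baseline_m / disparity_px.
    # "left.png"/"right.png" and the calibration numbers are placeholders.
    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = stereo.compute(left, right).astype("float32") / 16.0  # fixed-point -> pixels

    FOCAL_PX, BASELINE_M = 700.0, 0.06  # assumed glasses-width calibration
    depth_m = FOCAL_PX * BASELINE_M / disparity.clip(min=0.1)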
> But in the photos of Pickle 1, there is no sign of any spot to charge the device.
There is zero reason whatsoever to believe that those images are photos of the final product and not renders or props. It's like he's never seen marketing material before.
I can't even with this.
This guy's LinkedIn bio says "Aug 2022 - Mar 2023: Attended UVA as a first year studying economics and commerce before dropping out to build in VR full time." So it seems he's a self-important child with zero background. That explains a lot tbh.
SLAM is needed for world locked content.
>1980s-era corner feature detection, and basic fucking geometry
Which are pieces of how SLAM works.
>You can absolutely do what they show with the hardware that Pickle claims is in the glasses.
World-locked content is not novel; existing glasses can do it today. The claim is that Pickle didn't build it. The obvious answer would be that they are using what Qualcomm or someone else built, as opposed to Pickle building all of this within a month.
> SLAM is needed for world locked content.
It absolutely is not. Tracking is needed for mapping, not the other way around.
And it's definitely not needed for what they show in the video that this kid is complaining about. It's not even needed for associating things that go out of view and then come back, though it can help there.
> Which are pieces of how SLAM works.
Screws are pieces of how automobiles work, but it would be foolish to suggest that one needs a Honda Pilot to hang a painting on their wall.
> The claim is that Pickle didn't build it. The obvious answer would be that they are using what Qualcomm or someone else built
Please don't shift goalposts. The claim is that they're lying about capability. And the evidence given for that claim is flat out wrong.
>Tracking is needed for mapping, not the other way around.
The term "tracking" is being used in two different senses here. The tracking data that OpenXR exposes comes from SLAM, and SLAM is done via sensor fusion, including signals that come from tracked feature points.
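For the shape of that fusion, here's a deliberately toy 1D sketch (nothing like a production VIO stack): fast-but-drifting inertial integration corrected by slower absolute fixes, which in a real system would come from tracked features. All signals here are simulated:

    import numpy as np

    # Toy complementary filter: 200 Hz drifting "inertial" velocity integration,
    # corrected by 5 Hz absolute "visual" position fixes. Purely illustrative.
    rng = np.random.default_rng(0)
    dt, alpha, n = 0.005, 0.98, 2000
    true_pos = np.cumsum(np.sin(np.linspace(0, 10, n)) * dt)       # ground-truth path
    vel_meas = np.gradient(true_pos, dt) + rng.normal(0, 0.05, n)  # noisy IMU-ish signal

    est = 0.0
    for i in range(n):
        est += vel_meas[i] * dt                           # prediction: drifts over time
        if i % 40 == 0:                                   # low-rate visual fix arrives
            visual_fix = true_pos[i] + rng.normal(0, 0.002)
            est = alpha * est + (1 - alpha) * visual_fix  # pull the estimate back
    print(f"final error: {abs(est - true_pos[-1]):.4f}")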
>And it's definitely not needed for what they show in the video
The video shows 6DoF tracking, which a production implementation would do with SLAM.
>for associating things that go out of view and then come back
Having memory of what existed before implies you have a form of a map. You also want a map to be able to match up the views from the multiple cameras.
>Please don't shift goalposts.
The claim I am referring to is, "6DoF with spatial anchoring on a device this small and compute constrained is hard for any company to build, let alone Pickle."
> The video shows 6DoF tracking, which a production implementation would do with SLAM.
SLAM is not required to do what is shown in the video. Neither is an IMU. And an IMU is not required for SLAM either. Everything about the blog post is factually wrong.
> Having memory of what existed before implies you have a form of a map
Once again, you're just wrong here, and you're getting things backwards. Image feature correspondence works even without any spatial map. You need to find correspondences before you can begin to build a map, not the other way around!
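Concretely, this is just descriptor matching. A minimal OpenCV sketch; the image paths are placeholders:

    import cv2

    # Feature correspondence with zero mapping: detect ORB keypoints in two
    # frames and match descriptors directly. Recognizing "the same thing seen
    # again" needs only stored descriptors, not any spatial map.
    img_a = cv2.imread("frame_a.png", cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread("frame_b.png", cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=1000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
    print(f"{len(matches)} correspondences, no map built or consulted")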
Anyway, I don't have the energy to argue more with someone who confidently doesn't actually know what they're talking about. So, good luck, have fun.
(I probably should have said sparse feature tracking and not optical flow. People tend to get the wrong idea about what optical flow fundamentally requires: spatial regularity and density are not inherent requirements, though people often assume they are.)
Given the amount of tech I own that is supposed to do this (higher end VR headsets with hand tracking, AR glasses with environmental tracking, etc.), I wouldn't dismiss the author's claims.
But I'd be interested in your examples that can achieve what Pickle is offering in a single pair of glasses.
If you believe Pickle's claims that strongly and disbelieve OP's analysis that strongly, you should contact Pickle's CEO and get more info from him. Once you've built up some trust, you could join the CEO and take the other side of OP's bet.
Pickle's capability claims don't even need to be true for OP's "analysis" to still be extremely factually incorrect on many levels. Also, I consider gambling to be, on balance, a bad thing and have no desire to encourage it.
Well, I hope you never get into this business, because I doubt your glasses will work in the dark or on red-eye flights.
I have a sneaking suspicion that the video clip showing a rack of cookies on a brightly lit counter wasn't shot in the dark or on a red-eye flight, so that's a non sequitur.
Digital camera sensors are all inherently extremely sensitive to infrared anyway and can see quite well in the dark with nothing more than an IR LED if you don't add a physical filter over the sensor, soooo...
> This whole section is just wildly false. Tracking like that shown in the video is easily done with just a camera, 1980s-era sparse optical flow, and basic fucking geometry. No IMU needed. People have been doing far more complex and stable motion tracking with no more input than single-camera video for literally decades.
Not with imperceptible latency
(nb. I probably should have said sparse feature tracking and not optical flow. People tend to get the wrong idea about what optical flow fundamentally requires: spatial regularity and density are not inherent requirements, though people often assume they are.)
First of all, did you watch the video? (The whole thing is kinda annoying and long, but the part in question here is only about 3 seconds, so it's worth a look.) Two points about the video: 1) The positioning of the overlay is noticeably unstable relative to the apparent camera motion, so it doesn't even show what the OP claims it does. 2) You have no way of knowing from the video what the latency actually is.
Anyway, yes, even with imperceptible latency, and even in that form factor if you optimize for the right things. The kind of simple feature tracking that can accomplish what's shown in the video was real-time in like 2005, and there have been significant hardware and algorithmic advancements in the past 20 years.
Putting aside whether or not they are a fraud, their design looks really good, unlike Meta's ugly glasses. Which, of course, might be because it's the one thing they've spent time on in their two months of dev time, and they may not be accounting for any practical manufacturing realities.
I'm not going to argue about whether the design looks "good" or not, but it's irrelevant if the device doesn't do what it claims to.
The reason AR glasses are chonky and not sexy is that they have a bunch of hardware and batteries and whatnot that require them to be that shape and size.
Assuming they're fraudulent, they can make it look like anything they want, because it doesn't do what it purports to. I'm sure Ray-Ban and Meta want their glasses to look better, but it's simply not possible with the technology they have.