My takeaway:
Do not install apps. Use websites.
Apps have way too many permissions, even when they have "no permissions".
The unfortunate truth is that so many things these days require a dedicated mobile app to use.
I don't own or carry a smartphone. I'm still able to get by without one, but just barely.
I wish Uber or Lyft allowed me to use a website. I hate having to find a regular taxi or rely on the kindness of others to use their app.
It's not exactly a new technique, but it's effective for most highly targeted attacks. Honestly, if you're inclined and able enough to get a specific app onto the user's phone, you might as well just work off an Android app that's already been delivered to the user's phone. Like Facebook.
Throw a privacy notice at the users: "This app will take periodic screenshots of your phone." You'd be amazed how many people will accept it.
> Did you release the source code of Pixnapping?
> We will release the source code at this link once patches become available: https://github.com/TAC-UCB/pixnapping
It's not exactly impossible to reverse-engineer what's happening here. You could have waited until it was patched, but it sounds like you wanted to get yourselves attention as soon as possible.
A patch for the original vulnerability is already public: https://android.googlesource.com/platform/frameworks/native/... Its commit message explicitly states that it tries to defeat "pixel stealing by measuring how long it takes to perform a blur across windows."
The researchers aren't releasing their code because they found a workaround to the patch.
Then there's a bunch of "no GPU vendor has committed to patching GPU.zip" and "Google has not committed to patching our app list bypass vulnerability. They resolved our report as “Won’t fix (Infeasible)”."
And their original disclosure was on February 24, 2025, so I don't think you can accuse them of being too impatient.
As for "This app will take periodic screenshots of your phone", you still need an exploit to screenshot things that are explicitly excluded from screenshots (even if the user really wants to screenshot them.)
You know it's serious because it's got a domain and a logo. Even security researchers gotta create engagement and develop their brand.
Anyone remember the OG Heartbleed?
Judging just by the name, I was expecting a nice browser game.
Modern devices are simply too complex to be completely secure.
We have this tendency of adding more and more "features", more and more functionality, 85% of which nobody asked for or has any use for.
I believe there will be a market for a small, bare-bones, secure OS in the future, akin to how FreeBSD is run.
Bunnie's Precursor? It sounds cool, but it's also expensive as fuck. If you thought $100 for a graphing calculator was a ripoff, the Precursor has a similar form factor and level of computational power, but costs $1000 and can't be used in maths exams.
https://www.bunniestudios.com/blog/2020/introducing-precurso... (currently down, might be up later)
Discussion: https://news.ycombinator.com/item?id=45574613
In the previous discussion everyone seemed happy that it had been patched and that there was nothing to worry about (even though Android devices mostly don't run anything like the latest Android).
But in this write-up they say the patch doesn't fully work.
The bigger issue is the side channel itself, which leaks information from secure windows, even from protected buffers, potentially including DRM-protected content.
While the blurs make the side channel easier to use by providing a clear signal, considering you can predict the exact contents of the screen, I feel like you could get away with just a mask.
I'm not a phone designer, but could we imagine a new class of screen region that is excluded from screen grabs, draw-over, and soft focus with a mask, and then have notifications that show OTPs or PINs subscribe to use it?
App developers can already dynamically mark their windows as secure, which should prevent any other app from reading the pixels they render. The compositor composites all windows, including secure windows, and applies any effects like blur. No app is supposed to be able to see this final composited image, but this attack uses a side channel the researchers found that allows apps on the system to learn information about the pixels in the final composition.
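A purely illustrative sketch of the measurement half of that side channel, assuming the attacker has already stacked its blur-inducing windows over the target pixel; the FrameTimer class and onSample callback are hypothetical names, not the researchers' code:

    import android.view.Choreographer

    // Illustrative only: samples frame-to-frame render intervals while a blur is
    // drawn over a single target pixel. In a GPU.zip-style channel, render time
    // depends on how well the underlying pixels compress, so the distribution of
    // these samples leaks information about the targeted pixel.
    class FrameTimer(private val onSample: (deltaNanos: Long) -> Unit) : Choreographer.FrameCallback {
        private var lastFrameNanos = 0L

        fun start() = Choreographer.getInstance().postFrameCallback(this)

        override fun doFrame(frameTimeNanos: Long) {
            if (lastFrameNanos != 0L) {
                onSample(frameTimeNanos - lastFrameNanos) // one timing sample per frame
            }
            lastFrameNanos = frameTimeNanos
            Choreographer.getInstance().postFrameCallback(this) // keep sampling
        }
    }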
The attack needs you to be able to alter the blur of pixels in a secure window; this could be forbidden. A secure window should draw 100% as requested or not at all.
The blur happens in the compositor. It doesn't happen in the secure windows.
> A secure window should draw 100% as requested or not at all.
Take for example "night mode" which adds an orange tint to everything. If secure windows don't get such an orange tint they will look out of place. Being able to do post processing effects on secure windows is desirable, so as I said there is a trade off here in figuring out what should be allowed.
> Take, for example, "night mode", which adds an orange tint to everything. If secure windows don't get the orange tint, they will look out of place. Being able to apply post-processing effects to secure windows is desirable, so as I said, there is a trade-off here in figuring out what should be allowed.
That seems well worth the trade to me.
These sorts of restrictions also often interfere with accessibility and screen readers.
Either the screen reader is built into the OS as signed and trusted (which locks out competition in this space), or it's a pluggable interface, which opens an attack surface for reading secure parts of the screen.
Right, but night mode is built into the OS, so you can easily make an exception (same for things like toasts). Are there use cases where you need (a) a secure window and (b) a semi-transparent, app-controlled window on top of it?
Things like this make me wonder if the social media giants use attacks like these to gain certain info about you and advertise to you that way.
Either that, or Meta's ability to track and influence emotional state through behaviour is so good that they can advertise to me things I've only thought of and never uttered or even searched for anywhere.
Consider that your thoughts are a consequence of what you've consumed. They're not guessing what you think, they're influencing it.
Similar people thinking similar thoughts, I'd wager.
Are you sure that isn't just the horoscope effect?
Huh. I don't know that I've seen a whole domain name registered for a paper on a single CVE before.
It's been going on at least since https://www.heartbleed.com/, if not earlier.
It's quite standard for "big" CVEs nowadays
I'd say it started with Heartbleed.
Maybe Linus has a point
>"It looks like the IT security world has hit a new low," Torvalds begins. "If you work in security, and think you have some morals, I think you might want to add the tag-line: "No, really, I'm not a whore. Pinky promise" to your business card. Because I thought the whole industry was corrupt before, but it's getting ridiculous," he continues. "At what point will security people admit they have an attention-whoring problem?"
https://www.techpowerup.com/242340/linus-torvalds-slams-secu...
Interesting. Looks like I upset someone. Not sure why admitting to ignorance is so offensive. Maybe because it's so rare, hereabouts?