It's easier than ever to de-censor videos (jeffgeerling.com)
lynndotpy 46 minutes ago [-]
> Years ago it would've required a supercomputer and a PhD to do this stuff

This isn't actually true. You could do this 20 years ago on a consumer laptop, and you don't need the information you get for free from text moving under a filter either.

What you need is the ability to reproduce the conditions under which the image was generated and pixelated/blurred. If the pixel radius only encompasses, say, 4 characters, then you only need to search for those 4 characters first. And then you can proceed to the next few characters represented under the next pixelated block.

You can think of pixelation as a bad hash which is very easy to find a preimage for.
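
For a concrete picture, here is a minimal sketch of that preimage search in Python (the font, size, block size, and alphabet are all assumptions the attacker has to reproduce):

    from itertools import product

    import numpy as np
    from PIL import Image, ImageDraw, ImageFont

    ALPHABET = "abcdefghijklmnopqrstuvwxyz"
    FONT = ImageFont.truetype("DejaVuSansMono.ttf", 16)  # guessed rendering setup

    def render(text, size=(64, 24)):
        img = Image.new("L", size, 255)
        ImageDraw.Draw(img).text((0, 0), text, font=FONT, fill=0)
        return img

    def pixelate(img, block=8):
        small = img.resize((img.width // block, img.height // block), Image.BOX)
        return small.resize(img.size, Image.NEAREST)

    def best_match(target, length=4):
        # brute-force the handful of characters covered by the first block
        # (26**4 is about 457k renders: slow, but fine on a laptop)
        return min(
            ("".join(c) for c in product(ALPHABET, repeat=length)),
            key=lambda s: np.sum(
                (np.asarray(pixelate(render(s)), float) - np.asarray(target, float)) ** 2
            ),
        )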

No motion necessary. No AI necessary. No machine learning necessary.

The hard part is recreating the environment, though; AI just means you can skip that effort and know-how.

cogman10 40 minutes ago [-]
In fact, there was a famous de-censoring case where the censoring was a simple "whirlpool" (swirl) filter that was very easy to unwind.

If media companies want to actually censor something, nothing does better than a simple black box.

lynndotpy 33 minutes ago [-]
Ah yes, Mr. Swirl Face.

This was pretty different though. The decensoring algorithm I'm describing is just a linear search. But pixelation is not an invertible transformation.

Mr. Swirl Face just applied a swirl to his face, which is invertible (-ish, with some data lost), and could naively be reversed. (I am pretty sure someone on 4chan did it before the authorities did, but this might just be an Internet Legend).
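
That naive reversal can be sketched with scikit-image's swirl transform, for example: applying the same warp with the strength negated is the (lossy, interpolation-limited) inverse:

    from skimage import data
    from skimage.transform import swirl

    image = data.checkerboard()
    swirled = swirl(image, strength=10, radius=120)      # the "censoring"
    # the same warp with negated strength undoes it, minus interpolation loss
    unswirled = swirl(swirled, strength=-10, radius=120)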

Modified3019 21 minutes ago [-]
A long while ago, taking an image (typically porn), scrambling a portion of it, and having others try to figure out how to undo the scrambling was a game played on various chans.
cjbgkagh 19 minutes ago [-]
Christopher Paul Neil is a real person who went to jail.
thehappypm 26 minutes ago [-]
This gets exponentially harder with a bigger blur radius, though.
JKCalhoun 2 hours ago [-]
Yeah, that is pretty wild.

I recall a co-worker doing something related(?) for a kind of fun tech demo some ten years or so ago. If I recall correctly, he shot video while walking past a slightly ajar office door. His code reconstructed the full image of the office from the "traveling slit".
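
Something like that is only a few lines, e.g. with OpenCV (the filename and slit position here are placeholders, and it assumes the camera pans at a roughly constant speed):

    import cv2
    import numpy as np

    cap = cv2.VideoCapture("walk_past_door.mp4")   # hypothetical clip
    columns = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        columns.append(frame[:, frame.shape[1] // 2])  # one pixel column per frame
    cap.release()

    # stacking the columns side by side rebuilds the scene behind the slit
    panorama = np.stack(columns, axis=1)
    cv2.imwrite("reconstructed.png", panorama)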

I think about that all the time when I find myself in a public bathroom stall.... :-/

nkrisc 50 minutes ago [-]
> I think about that all the time when I find myself in a public bathroom stall.... :-/

Walk past a closed bathroom stall fast enough and you can essentially do that with your own eyes. Or stand there and quickly shift your head side to side. Just don't do it on one that's occupied; that's not cool.

altruios 2 minutes ago [-]
The dither effect. Same as seeing through splayed fingers on a frantically oscillating hand.
donatj 45 minutes ago [-]
"Sir, why do you keep running back and forth in the bathroom?"
rosswilson 11 minutes ago [-]
This reminds me of https://github.com/jo-m/trainbot, a neat example of stitching together frames of passing trains to form a panorama.

This frontend presents them nicely: https://trains.jo-m.ch

Agree2468 1 hours ago [-]
Line scan cameras operate on this principle and are still used in various ways to this day. I'm especially partial to the surreal photos they generate at the end of cycling races:

https://finishlynx.com/photo-finish-trentin-sagan-tour-de-fr...

JKCalhoun 38 minutes ago [-]
I was not aware of those.

Reminds me of slit-scan as well. And of course rolling shutters.

MisterTea 32 minutes ago [-]
> His code reconstructed the full image of the office from the "traveling slit".

This method is commonly used in vision systems employing line scan cameras. They are useful in situations where the objects are moving, e.g. along conveyors.

geerlingguy 18 minutes ago [-]
Even today most cameras have some amount of rolling shutter—the readout on a high-megapixel sensor is too slow/can't hold the entire sensor in memory instantaneously, so you get a vertical shift to the lines as they're read from top to bottom.
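
The effect is easy to simulate: treat each row as being read out slightly later than the one above it, so anything moving horizontally gets sheared (a toy numpy sketch):

    import numpy as np

    def rolling_shutter(frames, line_delay=1):
        """Build one output frame by reading each row from a later input frame.

        frames: array of shape (time, height, width[, channels]);
        line_delay: time steps between the readout of adjacent rows.
        """
        t, h = frames.shape[0], frames.shape[1]
        out = np.empty_like(frames[0])
        for row in range(h):
            out[row] = frames[min(row * line_delay, t - 1)][row]
        return out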

Global shutter sensors of similar resolution are usually a bit more expensive.

With my old film cameras, at higher shutter speeds, instead of exposing the entire frame at once, the camera would pass a slit between the front and rear shutter curtains across the film, so any given point was exposed for a thousandth of a second or less.

AdmiralAsshat 2 hours ago [-]
My Windows-98 approved method for redacting a screenshot:

1) Open screenshot in MS-Paint (can you even install MS-Paint anymore? Or is it Paint3D now?)

2) Select Color 1: Black

3) Select Color 2: Black

4) Use rectangular selection tool to select piece of text I want to censor.

5) Press the DEL key. The rectangle should now be solid black.

6) Save the screenshot.

As far as I know, AI hasn't figured out a way to de-censor solid black yet.
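
The same flatten-and-redact approach, scripted as a rough sketch with Pillow (filenames and coordinates are placeholders); saving to a brand-new file means the output contains only the final pixels:

    from PIL import Image, ImageDraw

    img = Image.open("screenshot.png").convert("RGB")   # also drops any alpha channel
    ImageDraw.Draw(img).rectangle((40, 120, 360, 150), fill="black")
    img.save("redacted.png")   # fresh file: no stale bytes, no hidden layers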

its-summertime 1 hours ago [-]
There was a programming competition (can't remember which; similar to the IOCCC but more about problematic software) where the redaction was reversible despite being pure black, because the chosen format left residual information in the image (vastly reduced quality, but enough for the text to be recovered!) [edit: see replies!]

There was also the Android (and iOS?) truncation issue where parts of the original image were preserved if the edited image took up less space. [edit: also see replies!]

Knowing some formats have such flaws (and I'm too lazy to learn which), I think the best option is to replace step 6 with "screenshot the redacted image", so the result is a completely new image based only on what the redacted image looks like, not on any intricacies of the original format.

qwertox 53 minutes ago [-]
Maybe you're referring to "aCropalypse". Also there was an issue once where sections with overpainted solid black color still retained the information in the alpha channel.

https://www.wired.com/story/acropalyse-google-markup-windows...

https://www.lifewire.com/acropalypse-vulnerability-shows-why...

Modified3019 14 minutes ago [-]
I also recall at one point some image file format that ended up leaking sensitive info, because it had an embedded preview or compressed image, and the editing program failed to regenerate the preview after a censor attempt.

Was a loooong time ago, so I don’t remember the details.

fullstop 5 minutes ago [-]
AT&T leaked information, as did the US Attorney's Office, when they released PDFs with redacted information. To redact, they changed the background of the text to match the color of the text. You could still copy and paste the text block to reveal the original contents.

https://www.cnet.com/tech/tech-industry/at-38t-leaks-sensiti...
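
That failure is trivial to demonstrate: the "hidden" text is still in the PDF's content stream, so any extractor returns it (a sketch with pdfminer.six and a hypothetical redacted.pdf):

    from pdfminer.high_level import extract_text

    # color tricks only change how the text is painted, not whether it exists
    print(extract_text("redacted.pdf"))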

fanf2 54 minutes ago [-]
You are thinking of John Meacham’s winning entry in the 2008 underhanded C contest https://www.underhanded-c.org/_page_id_17.html
ZeWaka 59 minutes ago [-]
There's tricks like this with embedded thumbnails.
googlryas 53 minutes ago [-]
The underhanded C contest: https://www.underhanded-c.org/
Retr0id 18 minutes ago [-]
> AI hasn't figured out a way to de-censor solid black yet.

I did though, under certain circumstances. Microsoft's Snipping Tool was vulnerable to the "acropalypse" vulnerability - which mostly affected the cropping functionality, but could plausibly affect images with blacked-out regions too, if the redacted region was a large enough fraction of the overall image.

The issue was that if your edited image had a smaller file size than the original, only the first portion of the file was overwritten, leaving "stale" data in the remainder, which could be used to reconstruct a portion of the unedited image.
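
The underlying file-handling mistake looks roughly like this (a toy sketch, not the actual Snipping Tool code):

    edited_png_bytes = b"..."   # placeholder: the re-encoded, smaller image

    # "r+b" overwrites in place and never shrinks the file
    with open("screenshot.png", "r+b") as f:
        f.write(edited_png_bytes)
        # missing f.truncate(): everything past the new data is stale bytes
        # from the unedited image, which is what "acropalypse" recovered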

To mitigate this in a more paranoid way (aside from just using software that isn't broken) you could re-screenshot your edited version.

a2128 1 hours ago [-]
> can you even install MS-Paint anymore? Or is it Paint3D now?

Paint3D, the successor to MSPaint, is now discontinued in favor of MSPaint, which doesn't support 3D but now has Microsoft account sign-in and AI image generation that runs locally on your Snapdragon laptop's NPU but still requires you to be signed in and connected to the internet to generate images. Hope that clears things up.

Arubis 1 hours ago [-]
What I love about this method is that it so closely matches what actual US govt censors do with documents pending release: take a copy, black it out with solid black ink, then _take a photocopy of that_ and use the photocopy for distribution.
devmor 54 minutes ago [-]
This is similar to how I censor images on a cellphone. I use an editor to cover what I want to censor with a black spot, then take a screenshot of that edited image and delete the original.
JimDabell 1 hours ago [-]
It’s possible, depending upon the circumstances. If you are censoring a particular extract of text and it uses a proportional font, then only certain combinations of characters will fit in a given space. Most of those combinations will be gibberish, leaving few combinations – perhaps only one – that have both matching metrics and meaning.
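
A sketch of that metrics filter with Pillow (the font file, measured width, and wordlist are all assumptions):

    from PIL import ImageFont

    font = ImageFont.truetype("arial.ttf", 14)   # assumed to match the original
    target_width = 123                           # measured width of the redacted box, px

    def fits(candidate, tolerance=1.0):
        return abs(font.getlength(candidate) - target_width) <= tolerance

    candidates = [w for w in open("wordlist.txt").read().split() if fits(w)]
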
HPsquared 59 minutes ago [-]
Not forgetting subpixel rendering.
lynndotpy 37 minutes ago [-]
Solid color would convey far less information, but it would still convey a minimum length of the secret text. If you can assume the font rendering parameters, this helps a ton.

As a simple scenario with monospace font rendering, say you know someone is censoring a Windows password that is (at most) 16 characters long. This significantly narrows the search space!
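
To put rough numbers on it (95 printable ASCII characters assumed, and supposing the rendered width pins the length to exactly 8 characters):

    PRINTABLE = 95

    up_to_16 = sum(PRINTABLE ** n for n in range(1, 17))   # "at most 16 characters"
    exactly_8 = PRINTABLE ** 8                             # length leaked by the box width
    print(f"{up_to_16:.2e} vs {exactly_8:.2e}")            # ~4.4e31 vs ~6.6e15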

graypegg 27 minutes ago [-]
That sort of makes me wonder if the best form of censoring would be a solid black shape, THEN passing it through some diffusion image generation step to infill the black square. It will be obvious that it's fake, but it'll make determining the "edges" of the censored area a lot harder. (Might also be a bit less distracting than a big black shape, for your actual non-adversarial viewers!)
layer8 1 hours ago [-]
If you want the blurred/pixelated look, blur/pixelate something else (like a lorem ipsum) and copy it over to the actual screenshot.
gruez 45 minutes ago [-]
>2) Select Color 1: Black

You don't need this step. It already defaults to black, and besides when you do "delete" it doesn't use color 1 at all, only color 2.

murdockq 24 minutes ago [-]
Wow, glad to see there are other fans of MSPaint. I can't believe I built my open-source version with wxWidgets 16 years ago: https://github.com/murdockq/OpenPaint
layman51 1 hours ago [-]
This is odd because when I follow your steps up to Step 5, the rectangle that gets cut out from the screenshot is white. I did remember to follow steps 2 and 3.
AdmiralAsshat 1 hours ago [-]
Might've changed in recent versions of Paint if you're on Win 11. It definitely used to take whatever you had as Color 2 as your background.
ZeWaka 55 minutes ago [-]
Still does.
eviks 1 hours ago [-]
This method looks worse than the pixelation/blur style; those tools "just" need to be updated to destroy the info first instead of faithfully using the original text.
MBCook 25 minutes ago [-]
If you REALLY care then replace the real information with fake information and pixelate that.

But most people don’t care enough.

Or I guess you could make a little video of pixelation that you just paste on top so it looks like you pixelated the thing but in reality there’s no correspondence between the original image and what’s on screen.

layer8 1 hours ago [-]
Don’t do this on a PDF document though. ;)
jcul 34 minutes ago [-]
Should be ok if you rasterize the PDF. Run something like pdftotext after to be sure it doesn't have any text.

Or to be safe, print it and scan it, or just take a screenshot.
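
A sketch of the rasterize step (assuming PyMuPDF; the DPI is arbitrary):

    import fitz  # PyMuPDF

    src = fitz.open("redacted.pdf")
    out = fitz.open()
    for page in src:
        pix = page.get_pixmap(dpi=200)               # render the page to pixels only
        new = out.new_page(width=page.rect.width, height=page.rect.height)
        new.insert_image(new.rect, pixmap=pix)       # image-only page, no text layer
    out.save("redacted-rasterized.pdf")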

layer8 29 minutes ago [-]
Testing that it doesn’t have text doesn’t help if the text was a bitmap in the first place.

Normally the use case is that you still want to distribute it as a PDF, usually consisting of many pages, and without loss of quality, so the printing/scanning/screenshotting option may not be very practical.

No, the real solution is to use an editor that allows you to remove text (and/or cut out bitmaps), before you add black rectangles for clarity.

jebarker 1 hours ago [-]
That's going to be a lot of work for a YouTube video though
SoftTalker 1 hours ago [-]
7) Print the screenshot

8) Scan the printed screenshot

genewitch 2 minutes ago [-]
Forgot the wooden table step...
HPsquared 57 minutes ago [-]
Or take a blurry misaligned photo of the screen.
eastbound 54 minutes ago [-]
This. Never give the original file, always take a screenshot of it. If it’s text being blacked out, it can be guessed from the length of words.
bob1029 31 minutes ago [-]
It would seem techniques like this have been used in domains like astronomy for a while.

> The reconstruction of objects from blurry images has a wide range of applications, for instance in astronomy and biomedical imaging. Assuming that the blur is spatially invariant, image blur can be defined as a two-dimensional convolution between true image and a point spread function. Hence, the corresponding deblurring operation is formulated as an inverse problem called deconvolution. Often, not only the true image is unknown, but also the available information about the point spread function is insufficient resulting in an extremely underdetermined blind deconvolution problem. Considering multiple blurred images of the object to be reconstructed, leading to a multiframe blind deconvolution problem, reduces underdeterminedness. To further decrease the number of unknowns, we transfer the multiframe blind deconvolution problem to a compact version based upon [18] where only one point spread function has to be identified.

https://www.mic.uni-luebeck.de/fileadmin/mic/publications/St...

https://en.wikipedia.org/wiki/Blind_deconvolution
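
For the simpler non-blind case (a known, spatially invariant PSF), a minimal deconvolution sketch along the lines of the scikit-image gallery example:

    import numpy as np
    from scipy.signal import convolve2d
    from skimage import color, data, restoration

    rng = np.random.default_rng()
    image = color.rgb2gray(data.astronaut())
    psf = np.ones((5, 5)) / 25                        # assumed point spread function
    blurred = convolve2d(image, psf, mode="same")
    blurred += (rng.poisson(lam=25, size=blurred.shape) - 10) / 255  # sensor noise

    # Richardson-Lucy deconvolution iteratively re-estimates the sharp image
    deblurred = restoration.richardson_lucy(blurred, psf, num_iter=30)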

42lux 47 minutes ago [-]
Bad blackout jobs have been in the news since the '50s, and every time an expert gives the same solution: if you want to censor something, remove the information.
nightpool 3 minutes ago [-]
Easier said than done if you're using a proportional font, though.
wlesieutre 53 minutes ago [-]
> If I hadn't moved around my Finder window in the video, I don't think it would've worked. You might get a couple letters right, but it would be very low confidence.

> Moving forward, if I do have sensitive data to hide, I'll place a pure-color mask over the area, instead of a blur or pixelation effect.

Alternatively: don't pixelate on a stationary grid when the window moves.

If you want it to look nicer than a color box but without giving away all the extra info when data moves between pixels, pixelate it once and overlay with a static screenshot of that.

For bonus points, you could automate scrambling the pixelation with fake-but-real-looking pixelation. Would be nice if video editing tools had that built in for censoring, knowing that pixelation doesn't work but people will keep thinking it does.

geerlingguy 50 minutes ago [-]
That's another good way to do it.

I wonder if it might be good for the blur/censor tools (like on YouTube's editor even) to do an average color match and then add in some random noise to the area that's selected...

Would definitely save people from some hassle.
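
Something like this, maybe (a rough numpy sketch: mean color plus noise, so no output pixel depends on the layout of the original pixels inside the region):

    import numpy as np

    def censor_region(frame, y0, y1, x0, x1, noise=12, rng=None):
        """Replace a region with its mean color plus random noise."""
        rng = rng or np.random.default_rng()
        region = frame[y0:y1, x0:x1].astype(np.float32)
        mean = region.mean(axis=(0, 1), keepdims=True)
        jitter = rng.normal(0, noise, region.shape)
        frame[y0:y1, x0:x1] = np.clip(mean + jitter, 0, 255).astype(frame.dtype)
        return frame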

wlesieutre 14 minutes ago [-]
The part that might take some work is matching the motion correctly; with a pixelated area or blacked-out rectangle, it doesn't matter if it's exactly sized or moving pixel-perfectly with the window. I haven't done any video editing in 20 years, so maybe that's not very difficult today?

That moving pixelation look is definitely cooler though. If you wanted to keep it without leaking data you could do the motion tracked screenshot step first (not pixelated, but text all replaced by lorem ipsum or similar) and then run the pixelation over top of that.

If any of you nerds reading this are into video editing, please steal this idea and automate it.

its-summertime 2 hours ago [-]
Speaking of which, the Lockpicking Lawyer's "Thank you" video https://www.youtube.com/watch?v=CwuEPREECXI always irked me a bit. Yeah, it's blurred, but recovering poor data from windowed input was possible back then, and way before then too; it's been a thing for 50+ years (e.g. radio signals, scanning tools, etc.). If you think about it, that's a cheap way to shift costs from physical improvement to computational improvement: just have a shutter. And yet he didn't block the information out, only blurred it.
zoky 1 hours ago [-]
I also have a network share named “mercury” connected to my Mac, and that last example nearly made me shit myself.
geerlingguy 57 minutes ago [-]
Ha! I name most of my shares after celestial bodies... Jupiter is the big 100 TB volume for all my archives. Mercury is an all-NVMe volume for speed, for my video editing mostly.
HPsquared 54 minutes ago [-]
I wonder how much random noise (or other randomness) would have to be added to the pixelated version to make this method unusable.
miki123211 49 minutes ago [-]
If you really want that blur effect so badly, you can just replace your content with something innocuous, and then blur that innocuous content.

This is what you actually have to do with websites, e.g. when you want some content blurred when it's behind a paywall. If you leave the original text intact, people can just remove the CSS blur in dev tools.

Some implementations get this slightly wrong, and leave the placeholder content visible to accessibility tools, which sometimes produces hilarious and confusing results if you rely on those.

mikelitoris 48 minutes ago [-]
Does this guy look like Eminem or am I tripping?
brunosutic 2 hours ago [-]
I like this Jeff Geerling guy.
ge96 45 minutes ago [-]
He's like THE (or was THE) Raspberry Pi guy.
formerly_proven 2 hours ago [-]
> Intuitively, blur might do better than pixelation... but that might just be my own monkey brain talking. I'd love to hear more in the comments if you've dealt with that kind of image processing in the past.

A pixelization filter at least actively removes information from an image. Gaussian or box blurs are straight-up invertible by deconvolution; the only reason that doesn't work out of the box is that the blurring is done with low precision (e.g. directly on 8-bit sRGB) or quantized to a low-precision format afterwards.
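
That quantization point is easy to demonstrate with a toy inverse filter (circular box blur via the FFT; sizes are arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)
    img = rng.random((64, 64))          # stand-in for a grayscale crop
    psf = np.zeros_like(img)
    psf[:3, :3] = 1 / 9                 # 3x3 box blur kernel, zero-padded

    F, iF = np.fft.fft2, np.fft.ifft2
    blurred = iF(F(img) * F(psf)).real

    # full precision: dividing in the frequency domain recovers the image exactly
    exact = iF(F(blurred) / F(psf)).real
    # 8-bit quantization (what saving to PNG does) breaks the inversion, because
    # the rounding noise gets amplified wherever the kernel's spectrum is small
    quantized = np.round(blurred * 255) / 255
    broken = iF(F(quantized) / F(psf)).real

    print(np.abs(exact - img).max())    # ~1e-15
    print(np.abs(broken - img).max())   # orders of magnitude larger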

danjl 1 hours ago [-]
Exactly. Do not use blur to hide information. Blurring simply "spreads out" the data, rather than removing it. Just search (you know, on Google, without an LLM) for "image unblur".
Funes- 1 hours ago [-]
Japanese porn is being "decensored" with AI as we speak, in fact. It looks a tad uncanny, still, but finding a "decensored" clip in the wild was quite the thing for me a couple of weeks ago.
internetter 1 hours ago [-]
This is a completely different process — the AI is inferring what goes there; it isn't actually using any information from the pixels, so it wouldn't work in this case.

Not to mention deeply and disturbingly unethical

gjsman-1000 47 minutes ago [-]
So let me get this straight: Porn can be ethical - selling your nude features online can be ethical - doing the activities in porn consensually can be ethical - pleasuring yourself on other people doing so can be ethical - but using AI to infer nude features is "disturbingly unethical"?
ziddoap 11 minutes ago [-]
>but using AI to infer nude features is "disturbingly unethical"?

If it is against the wishes of the people in the video, yes, yes it is.
