Every photo you take records your exact GPS location.
Not in the image. Hidden in the file itself.
Camera model. Date. Time. Latitude and longitude. All embedded invisibly in every photo you've ever shared.
Most people have sent thousands of photos without knowing this.
Here's how to stop it in 3 minutes (bookmark this):
I hope you've found this thread helpful. Follow me @HowToAI_ for more.
You've probably sent hundreds of photos with your exact location embedded in the file.
You've probably "deleted" thousands of files that are still sitting on your drive right now.
But now you know. 3 minutes. Free tools. No more leaked data.
Bookmark this. Send it to everyone you care about.
For the truly cautious:
Encrypt your entire drive. VeraCrypt is free, open-source, and handles full volume encryption.
Even if someone recovers a file, they can't read encrypted data without the key.
And if you're decommissioning a drive? Governments and security organizations consider physical destruction the only foolproof method. Shredding software is for everyday use. A hammer is for when it matters most.
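The "no key, no data" principle is easy to see for yourself. Here's a minimal Python sketch using the `cryptography` package (pip install cryptography). This is per-file symmetric encryption, a toy illustration of the idea only; VeraCrypt encrypts whole volumes, not individual files:

```python
# Demo of "no key, no data" with the cryptography package's Fernet.
# Illustration only -- VeraCrypt works at the volume level, not per file.
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()                 # lose this, lose the data
ciphertext = Fernet(key).encrypt(b"contents of a sensitive file")

print(Fernet(key).decrypt(ciphertext))      # right key: plaintext comes back

try:
    Fernet(Fernet.generate_key()).decrypt(ciphertext)  # wrong key
except InvalidToken:
    print("wrong key: ciphertext stays unreadable")
```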
Now the bigger problem.
When you delete a file, your operating system doesn't erase it. It flags that spot on the drive as "available to be overwritten."
Until something else writes over that exact spot, the file is still there. Fully intact. Recoverable with free software.
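That's why "file shredders" exist: they overwrite the bytes before deleting. Here's a toy sketch of the idea in Python (illustration only; on SSDs, wear leveling can keep old copies of blocks around, so no software-only shred is a guarantee):

```python
# Toy "shred": overwrite a file with random bytes, then delete it.
# Caveat: SSD wear leveling may preserve old copies of the blocks,
# so treat this as an illustration, not a guarantee.
import os

def shred(path, passes=3):
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))   # replace the contents in place
            f.flush()
            os.fsync(f.fileno())        # force the bytes onto the disk
    os.remove(path)                     # now drop the directory entry too
```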
The fix:
Download BleachBit (free and open-source).
It scans for caches, temporary files, logs, and leftover data from dozens of programs. Select what you want gone. Preview the file list. Hit Clean.
If you're on an iPhone, you don't even need an app. You can build a 3-step Shortcut.
1. Select Photos
2. Convert Image → Uncheck ‘Preserve Metadata’
3. Share Image
Enable 'Show in Share Sheet' and you can strip metadata in one tap before texting anyone.
Takes 2 minutes to set up. Works forever.
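On a laptop, the same strip takes a few lines of Python with Pillow (pip install Pillow). A minimal sketch, with placeholder filenames: copying only the pixels into a fresh image drops the EXIF block entirely, because metadata never comes along.

```python
# Strip metadata by copying only the pixels into a brand-new image.
# Filenames are placeholders; the output file has no EXIF block at all.
from PIL import Image

src = Image.open("photo.jpg")
clean = Image.new(src.mode, src.size)
clean.putdata(list(src.getdata()))      # pixel data only, no metadata
clean.save("photo_clean.jpg")
```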
On Android, download an Exif removal app. Several free options exist on the Play Store.
Messaging apps like WhatsApp and Signal strip metadata automatically. But email, AirDrop, and direct file sharing do not.
Know which channels protect you and which don't.
First, what's actually happening with your photos.
Every time you snap a picture, your phone embeds invisible "EXIF tags" into the file.
It logs the camera model, the exact time, and the precise GPS coordinates of where you took it.
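You can dump those tags yourself in a few lines of Python. A sketch using Pillow (9.4 or newer, for the ExifTags.IFD enum); point it at any JPEG straight off your phone:

```python
# Dump the EXIF tags a photo carries, GPS block included.
# Requires Pillow >= 9.4 for the ExifTags.IFD enum.
from PIL import Image, ExifTags

exif = Image.open("photo.jpg").getexif()

for tag_id, value in exif.items():                  # model, timestamp, ...
    print(ExifTags.TAGS.get(tag_id, tag_id), value)

gps = exif.get_ifd(ExifTags.IFD.GPSInfo)            # the location sub-block
for tag_id, value in gps.items():
    print(ExifTags.GPSTAGS.get(tag_id, tag_id), value)
```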
Someone just built a dead simple App Store screenshot maker that runs 100% in the browser.
Just drop in your raw screenshot and it auto-generates the device frame, dimensions, and text overlays for every phone size the App Store needs.
100% free to try.
NVIDIA has solved the biggest trade-off in LLMs.
And it delivers a 6x speed boost without losing a single point of quality.
Every AI you use today (GPT-4, Claude, Gemini) is "Autoregressive." This means the model is forced to think in a straight line, one token at a time, left-to-right.
It’s like a genius writer who can only type with one finger.
The hardware under the hood, your massive GPU, is actually sitting idle 90% of the time, waiting for that one finger to hit the next key.
NVIDIA published a paper that changes the math.
They figured out how to make the AI do two things at once in a single forward pass.
1. The "Talk" (AR): The model handles the immediate next word with perfect logical precision.
2. The "Think" (Diffusion): While it's talking, it uses its "idle" brainpower to parallel-draft the next 10–20 words in advance.
It’s a hybrid brain.
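To get a feel for the mechanic, here's a toy draft-then-verify loop in Python. The two functions are made-up stand-ins for the two halves of the model, not NVIDIA's code, and the paper's real trick is running both in a single forward pass. But the accept-until-first-mismatch loop shows why quality doesn't drop: every token the sequence keeps is one the AR head would have produced anyway.

```python
# Toy draft-then-verify loop. ar_next() and diffusion_draft() are made-up
# stand-ins for the two halves of the model -- NOT NVIDIA's implementation.
import random
random.seed(0)

def ar_next(seq):
    # Pretend AR head: a deterministic "correct" next token.
    return (sum(seq) * 31 + len(seq)) % 100

def diffusion_draft(seq, k):
    # Pretend diffusion drafter: usually agrees with the AR head.
    out, ctx = [], list(seq)
    for _ in range(k):
        tok = ar_next(ctx) if random.random() < 0.8 else random.randrange(100)
        out.append(tok)
        ctx.append(tok)
    return out

def generate(prompt, max_len=30, k=8):
    seq = list(prompt)
    while len(seq) < max_len:
        for tok in diffusion_draft(seq, k):   # k tokens proposed in parallel
            expected = ar_next(seq)           # AR head checks each one
            seq.append(expected)              # keep only AR-approved tokens
            if tok != expected:               # first mismatch: drop the rest
                break
    return seq[:max_len]

print(generate([1, 2, 3]))
```

Since most drafted tokens match, each pass commits several tokens for roughly the cost of one; the drafter's mismatch rate, not the model size, sets the speedup.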
The results are a massive wake-up call for the industry:
- 6x Speedup: It delivers nearly 6x the tokens per second of standard models.
- Zero Quality Loss: Unlike previous "fast" models that get "blurry" or hallucinate, TiDAR matches the quality of the world’s best LLMs.
- GPU Efficiency: It finally stops wasting the expensive compute power big tech is burning billions on.
We’ve spent years trying to make AI smarter by making it bigger.
But this paper proves that the real bottleneck wasn't the size of the brain, it was how the brain was scheduled.
Paper: "TiDAR: Think in Diffusion, Talk in Autoregression" (NVIDIA, 2025)
Google has quietly dropped what researchers are calling "Attention Is All You Need V2."
And it signals the end of the Transformer era as we know it.
In 2017, the original "Attention Is All You Need" paper changed the world by proving that AI doesn't need recurrence, it just needs to pay attention.
But today, even the most advanced models like GPT and Gemini suffer from a massive, structural flaw: Catastrophic Forgetting.
The moment an AI learns something new, it starts losing what it learned before. It's part of why models drift, hallucinate, or lose the thread in long conversations.
This paper, titled "Nested Learning: The Illusion of Deep Learning Architectures," completely replaces the way AI stores information.
The researchers have introduced a paradigm shift called Nested Learning (NL).
Here is why this is "V2":
For the last decade, we treated AI models as one giant, flat mathematical function. NL proves that a model is actually a set of thousands of smaller, "nested" optimization problems running in parallel.
Instead of one giant "memory," each layer has its own internal "context flow." This allows the model to learn new tasks at test-time without overwriting its core intelligence.
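To make "an optimization problem inside an optimization problem" concrete, here's a generic two-level toy in NumPy. This is my illustration of the general idea (fast weights adapted per context, slow weights frozen), not the paper's HOPE architecture:

```python
# Generic nested-optimization toy: an inner loop fits fast weights to the
# current context at test time while the pretrained slow weights stay
# frozen. Illustration only -- not the paper's HOPE architecture.
import numpy as np

rng = np.random.default_rng(0)
slow_W = rng.normal(size=(4, 4))          # "pretrained" weights, frozen

def adapt_and_predict(ctx_x, ctx_y, query_x, steps=50, lr=0.1):
    fast_W = np.zeros((4, 4))             # per-context memory, starts blank
    for _ in range(steps):                # the inner optimization problem
        err = ctx_x @ (slow_W + fast_W) - ctx_y
        fast_W -= lr * ctx_x.T @ err / len(ctx_x)   # only fast weights move
    return query_x @ (slow_W + fast_W)    # slow weights never overwritten

# A new "task" seen only at test time: a shifted version of the base map.
task_W = slow_W + rng.normal(scale=0.5, size=(4, 4))
X = rng.normal(size=(32, 4))
print(adapt_and_predict(X, X @ task_W, rng.normal(size=(1, 4))))
```

Run it and the query predictions track the new task even though slow_W never changed; that separation between what adapts and what persists is the claim NL formalizes.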
It moves us past the static Transformer. The new architecture (HOPE) demonstrated 100% stability in long-context memory and "post-training adaptation" that was previously impossible.
The technical takeaway is brutal for the competition:
Existing deep learning works by compressing information until it breaks. Nested Learning works by organizing information so it can grow forever.
We’ve spent 7 years trying to make Transformers bigger. Google figured out how to make them "Nested."
The Transformer replaced the RNN in 2017.
Nested Learning is here to replace the Transformer in 2026.
OpenAI and Anthropic engineers leaked a prompting technique that separates beginners from experts.
It's not "act as an expert."
It's not "be detailed."
It's not even a prompt at all.
It's a question.
They call it "Socratic prompting" and it takes 10 seconds to learn:
Someone open-sourced a Claude Code skill that optimizes your entire website for AI search engines.
It's called GEO-SEO. It audits your site, finds why AI models ignore it, and rewrites it to get cited by AI search engines like ChatGPT, Perplexity, and Claude.
→ Works on any website
→ No $5k/month SEO retainers
→ 100% Open Source.
Someone has open-sourced an entire AI assistant that runs 100% from a USB drive.
Just plug it into any Windows, Mac, or Linux machine and you get an uncensored LLM with zero install, zero internet, zero trace.
No cloud. No data ever leaves the stick.
It's called Portable-AI-USB and it replaces a $20/mo ChatGPT subscription with something nobody can shut off, monitor, or take away.
Here's what's wild about it:
→ Boots Ollama + AnythingLLM directly from the USB on Windows, Mac, and Linux
→ Pick from 6 curated models, including uncensored ones like NemoMix 12B and Dolphin 2.9
→ Or paste any HuggingFace .gguf link and roll your own
→ Zero registry keys, zero local files, zero telemetry: pull it out and the host PC has no idea it ran
→ All chats and settings live on the USB, so the same drive moves between machines
The setup is one click:
1. Copy the repo to a 16GB+ USB (32GB if you want the larger models)
2. Format as exFAT
3. Double-click install.bat
4. Pick your model from the menu
5. Done, your private AI lives in your pocket
Here's the wildest part:
It works offline forever after the first download.
No API key. No login. No "we've updated our terms of service." No company can quietly nerf your model or jack the price next quarter.
You own it. On a $5 USB stick.
100% open source, MIT license.
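One more thing worth knowing: Ollama serves a local HTTP API on port 11434, so once the stick is running, any script on the machine can use the model with nothing but the standard library. A minimal sketch (the model tag is an example; substitute whichever one you installed):

```python
# Query the local Ollama server started by the USB setup above.
# The model tag is an example -- use whichever model you installed.
import json, urllib.request

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps({
        "model": "dolphin-mistral",   # example tag
        "prompt": "Summarize EXIF metadata in one sentence.",
        "stream": False,              # return one JSON blob, not a stream
    }).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```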