macOS’s Little-Known Command-Line Sandboxing Tool

sandbox-exec: macOS's Little-Known Command-Line Sandboxing Tool | Igor's Techno Club, 17 Apr 2025

EARLY ACCESS: If you want a deeper, up-to-date treatment of sandbox-exec — including modern macOS pitfalls, Mach/XPC failures, and working profiles — I'm documenting it as an extended handbook, sandbox-exec: The Missing Handbook (early access, $12).

What is sandbox-exec?

sandbox-exec is a built-in macOS command-line utility that lets you execute applications within a sandboxed environment. In essence, it creates a secure, isolated space where applications run with limited access to system resources, touching only what you explicitly permit. The concept behind sandboxing is fundamental to modern security: by restricting what an application can access, you minimize the potential damage from malicious code or unintended behavior. Think of it as putting an application in a secure room where it can only interact with the specific objects you've placed there.

Benefits of Application Sandboxing

Before diving into usage, let's understand why sandboxing matters:

- Protection from malicious code: If you're testing an unfamiliar application or script, sandboxing can prevent it from accessing sensitive files or sending data across the network.
- Damage limitation: Even trusted applications can have vulnerabilities. Sandboxing limits the potential impact if an application is compromised.
- Privacy control: You can explicitly deny applications access to personal directories like Documents, Photos, or Contacts.
- Testing environment: Developers can test how applications function with limited permissions before implementing formal App Sandbox entitlements.
- Resource restriction: Beyond security, sandboxing can limit an application's resource consumption or network access.

Getting Started with sandbox-exec

Using sandbox-exec requires creating a sandbox profile (a configuration file) that defines the rules for your secure environment. The basic syntax is:

    sandbox-exec -f profile.sb command_to_run

where profile.sb contains the rules defining what the sandboxed application can and cannot do, and command_to_run is the application you want to run within those constraints.

Understanding Sandbox Profiles

Sandbox profiles use a Scheme-like syntax (a Lisp dialect) with parentheses grouping expressions. The basic structure includes:

- A version declaration: (version 1)
- A default policy: (deny default) or (allow default)
- Specific rules allowing or denying operations

Rules can target specific resources using:

- Literal paths: (literal "/path/to/file")
- Regular expressions: (regex "^/System")
- Subpath (prefix) matches: (subpath "/Library")

See the Appendix for a more complete list of available rules.

Two Fundamental Approaches to Sandboxing

There are two primary philosophies when creating sandbox profiles:

1. Deny by Default (Most Secure)

This approach starts by denying everything and explicitly allowing only the required operations:

(version 1
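The excerpt cuts off at the start of that deny-by-default profile, so here is a minimal illustrative sketch of what one can look like. It is my own example, not the article's: the target binary, the project path, and the exact set of allowances are assumptions, real programs usually need a few extra rules (for example sysctl-read or mach-lookup), and operation names can vary between macOS versions.

    ;; example.sb -- illustrative deny-by-default profile (assumed, not from the article)
    (version 1)
    (deny default)

    ;; let the target binary start and load system libraries
    (allow process-exec (literal "/bin/ls"))
    (allow file-read* (subpath "/usr/lib")
                      (subpath "/System/Library"))

    ;; read-only access to one project directory, nothing else
    (allow file-read* (subpath "/Users/me/project"))

    ;; writes, network access, and personal folders stay denied by the default policy

Run it with:

    sandbox-exec -f example.sb /bin/ls /Users/me/project

If the command needs something the profile does not grant, that operation fails and the denial is typically logged, which is how you iterate toward a minimal working profile.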

Source: Hacker News | Original Link

I Verified My LinkedIn Identity. Here’s What I Handed Over

I Verified My LinkedIn Identity. Here's What I Actually Handed Over. | THE LOCAL STACK. Feb 16, 2026, 10 min read. Tags: privacy, linkedin, biometrics, gdpr, cloud-act, identity.

I wanted the blue checkmark on LinkedIn. The one that says "this person is real." In a sea of fake recruiters, bot accounts, and AI-generated headshots, it seemed like a smart thing to do. So I tapped "verify." I scanned my passport. I took a selfie. Three minutes later — done. Badge acquired. I felt a tiny dopamine hit of legitimacy.

Then I did what apparently nobody does. I went and read the privacy policy and terms of service. Not LinkedIn's. The other company's.

Wait, What Other Company?

When you click "verify" on LinkedIn, you're not giving your passport to LinkedIn. You get redirected to a company called Persona. Full name: Persona Identities, Inc. Based in San Francisco, California. LinkedIn is their client. You are the face being scanned. I had never heard of Persona before this. Most people haven't. That's kind of the point — they sit invisibly between you and the platforms you trust. So I downloaded their privacy policy (18 pages) and their terms of service (16 pages). Here's what I found.

Everything I Gave Them

For a three-minute identity check, this is what Persona collected:

- My full name — first, middle, last
- My passport photo — the full document, both sides, all data on the face of it
- My selfie — a photo of my face taken in real time
- My facial geometry — biometric data extracted from both images, used to match the selfie to the passport
- My NFC chip data — the digital info stored on the chip inside my passport
- My national ID number
- My nationality, sex, birthdate, age
- My email, phone number, postal address
- My IP address, device type, MAC address, browser, OS version, language
- My geolocation — inferred from my IP

And then there's the weird stuff:

- Hesitation detection — they tracked whether I paused during the process
- Copy and paste detection — they tracked whether I was pasting information instead of typing it

Behavioral biometrics. On top of the physical biometrics. For a LinkedIn badge.

They Also Called Their Friends

Persona didn't just use what I gave them. They went and cross-referenced me against what they call their "global network of trusted third-party data sources":

- Government databases
- National ID registries
- Consumer credit agencies
- Utility companies
- Mobile network providers
- Postal address databases

I scanned my passport for a checkmark. They ran a background check.

My Face Is Training Data

Here's something I almost missed. Buried in a table on page 6 of the privacy policy, under "legitimate interests": they use uploaded images of identity documents — that's my passport — to train their AI. They're teaching their system to recognize what passports look like in different countries. They also use your selfie to "identify improvements in the Service." The legal basis? Not consent. Legitimate interests.

Source: Hacker News | Original Link

abhigyanpatwari/GitNexus – GitNexus: The Zero-Server Code Intelligence Engine – GitNexus is a client-side knowledge graph creator that runs entirely in your browser. Drop in a GitHub repo or ZIP file, and get an interactive knowledge graph with a built-in Graph RAG Agent. Perfect for code exploration

GitHub – abhigyanpatwari/GitNexus: The Zero-Server Code Intelligence Engine. GitNexus is a client-side knowledge graph creator that runs entirely in your browser: drop in a GitHub repo or ZIP file and get an interactive knowledge graph with a built-in Graph RAG Agent. Perfect for code exploration. gitnexus.vercel.app. 809 stars, 76 forks, 143 commits, main branch.

README:

GitNexus: Building git for agent context. Indexes any codebase into a knowledge graph — every dependency, call chain, cluster, and execution flow — then exposes it through smart tools so AI agents never miss code. [Video: Gitnexus_CLI.1.mp4]

Like DeepWiki, but deeper. DeepWiki helps you understand code. GitNexus lets you analyze it — because a knowledge graph tracks every relationship, not just descriptions.

TL;DR: The Web UI is a quick way to chat with any repo. The CLI + MCP is how you make your AI agent actually reliable — it gives Cursor, Claude Code, and friends a deep architectural view of your codebase so they stop missing dependencies, breaking call chains, and shipping blind edits. Even smaller models get full architectural clarity, letting them compete with goliath models.

Two Ways to Use GitNexus:

- CLI + MCP: index repos locally, connect AI agents via MCP. For daily development with Cursor, Claude Code, Windsurf, OpenCode. Scale: full repos, any size.
- Web UI: visual graph explorer + AI chat in the browser. For quick exploration, demos, one-off analysis. Scale: limited by browser memory (~5k files).

Install: npm in

Source: GitHub Trending | Original Link

Andrej Karpathy talks about “Claws”

Andrej Karpathy talks about "Claws". Simon Willison's Weblog, 21st February 2026.

Andrej Karpathy tweeted a mini-essay about buying a Mac Mini ("The apple store person told me they are selling like hotcakes and everyone is confused") to tinker with Claws:

"I'm definitely a bit sus'd to run OpenClaw specifically […] But I do love the concept and I think that just like LLM agents were a new layer on top of LLMs, Claws are now a new layer on top of LLM agents, taking the orchestration, scheduling, context, tool calls and a kind of persistence to a next level. Looking around, and given that the high level idea is clear, there are a lot of smaller Claws starting to pop out. For example, on a quick skim NanoClaw looks really interesting in that the core engine is ~4000 lines of code (fits into both my head and that of AI agents, so it feels manageable, auditable, flexible, etc.) and runs everything in containers by default. […] Anyway there are many others – e.g. nanobot, zeroclaw, ironclaw, picoclaw (lol @ prefixes). […] Not 100% sure what my setup ends up looking like just yet but Claws are an awesome, exciting new layer of the AI stack."

Andrej has an ear for fresh terminology (see vibe coding, agentic engineering) and I think he's right about this one, too: "Claw" is becoming a term of art for the entire category of OpenClaw-like agent systems – AI agents that generally run on personal hardware, communicate via messaging protocols and can both act on direct instructions and schedule tasks. It even comes with an established emoji 🦞

This is a link post by Simon Willison, posted 21st February 2026.

Source: Hacker News | Original Link

Acme Weather

Introducing Acme Weather. Adam Grossman, February 16, 2026.

Fifteen years ago, we started work on the Dark Sky weather app. Over the years it went through numerous iterations — including more than one major redesign — as we worked our way through the process of learning what makes a great weather app. Eventually, in time, it was acquired by Apple, where the forecast and some core features were incorporated into Apple Weather. We enjoyed our time at Apple. So why did we leave to start another weather company? It's simple: when looking at the landscape of the countless weather apps out there, many of them lovely, we found ourselves feeling unsatisfied. The more we spoke to friends and family, the more we heard that many of them did too. And, of course, we missed those days as a small scrappy shop. So let's try this again…

Embracing Uncertainty

Our biggest pet peeve with most weather apps is how they deal (or rather, don't deal) with forecast uncertainty. It is a simple fact that no weather forecast will ever be 100% reliable: the weather is moody, fickle, and chaotic. Forecasts are often wrong. Understanding this uncertainty is crucial for planning your day. Most weather apps will give you their single best guess, leaving you to wonder how sure they actually are, and what else might happen instead. Will it actually start raining at 9am, or might it end up pushed off until noon? Will there be rain or snow? How sure are you? You can't plan your day if you don't know how much you can trust the forecast, or know what other possibilities might arise. Rather than pretending we will always be right, Acme Weather embraces the idea that our forecast will sometimes be wrong. We address this uncertainty in several ways:

Alternate Possible Futures

Our homegrown forecasts are produced using many different data sources, including numerical weather prediction models, satellite data, ground station observations, and radar data. Most of the time, our forecast will be a reliable source of information (it's better than the one we had at Dark Sky). But, crucially, we supplement the main forecast with a spread of alternate predictions. These are additional forecast lines that capture a range of alternate possible outcomes.

[Figure: A forecast showing multiple possible outcomes]

This accomplishes a couple things: First, the spread of the lines offers a sort of intuition as to how reliable the forecast is. Take the two forecasts below. In the first, the alternate predictions are tightly focused and the forecast can be considered robust and reliable. In the second, there is a significant spread, which is an indication that something is up and the forecast may be subject to change. It's a call to action to check other conditions or maps, or come back to the app more frequently.

[Figures: A more reliable forecast; A less reliable forecast]

Over time, you build up an intuitive sense of just how much you can actually trust the forecast. After using this for the past six mo

Source: Hacker News | Original Link

24 Hour Fitness Won’t Let You Unsubscribe From Marketing Spam, So I Fixed It

24 Hour Fitness Won't Let You Unsubscribe From Marketing Spam, So I Fixed It – Ahmed Kaddoura

24 Hour Fitness has a broken unsubscribe page. You get one of their marketing emails. You click the unsubscribe link at the bottom. It takes you here:

https://www.24hourfitness.com/members/unsubscribe

You enter your email. You click unsubscribe. You get a mysterious error message in Spanish. I found the bug. It's one line of JavaScript. I reported it back in November 2025. No response. So I built my own unsubscribe page. Getting spammed by 24 Hour Fitness marketing emails? Unsubscribe Now →

"The audacity of a Spanish error message on a US gym website." — Claude

What the heck is this? 🤔 "Error de conexión al obtener el token de OneTrust." ("Connection error while obtaining the OneTrust token.") OneTrust is an American software company that develops privacy, security, and data governance software. Their platform includes tools for consent management and regulatory compliance automation. The irony: OneTrust is literally a consent management platform focused on regulatory compliance, and 24 Hour Fitness is using it to violate consent regulations. The error is in Spanish for some reason.

This is actually illegal

The CAN-SPAM Act requires commercial emails to have a working opt-out mechanism. Companies that violate this face serious fines:

- Verkada: $2.95 million (2024) – the largest CAN-SPAM penalty ever. They ignored opt-out requests.
- Jumpstart Technologies: $900,000 (2006) – didn't process opt-out requests in time.
- Experian: $650,000 (2023) – spammed users with emails they couldn't opt out of.

Each individual email can carry a penalty of up to $53,088.

Marketing email = psychic attack

I don't subscribe to anything. Not newsletters. Not Substacks. Not even blogs from writers I deeply care about. My inbox is for communication, not marketing. I'm definitely not subscribing to 24 Hour Fitness marketing spam. Since October 2025, I've received 40 marketing emails. Every single one links to the same broken unsubscribe page. Each of these emails is a psychic attack. An attack on my attention. Here are the subject lines:

- "$20 off BUM Energy Cases ⚡️". I don't know what BUM energy is. I just want to lift.
- "3 tips for healthier holiday eating". I don't want tips. I just want to lift.
- "30% off creatine to power your holidays 💪". I already have creatine. I just want to lift.
- "Ahmed, find workouts made for you". I already have workouts. I just want to lift.
- "Best Personal Training Offer of the Year". I don't want personal training. I just want to lift.
- "Bring a Guest This Friday for Free". I already have free guest passes. I just want to lift.
- "Bring your workout buddy for free 💪". I already have free guest passes. I just want to lift.
- "Celebrating Coach Joshua 🎉". I don't know Coach Joshua. I just want to lift.
- "Celebrating Coach Karen 🎉". I don't know Coach Karen. I just want to lift.
- "Don't Miss Out: Sessions As Low as $55.08 🔥". I just want to lift.
- "Feel Stronger Every Time with HIIT24™". I just want to lift.
- "Find your community in our group

Source: Hacker News | Original Link

The Evolution of x86 SIMD: From SSE to AVX-512

BGs Labs

The story of x86 SIMD is not simply about technology. It's about marketing, corporate politics, engineering compromises, competitive pressure. This is the behind-the-scenes history of how Intel and AMD battled for vector supremacy, the controversial decisions that defined an architecture, and the personalities who made it happen.

Part I: The Not-So Humble Beginnings (1993-1999)

The MMX Gamble: Intel's Israel Team Takes a Huge Risk

The story of MMX begins not in Santa Clara, but in Haifa, Israel. In 1993, Intel made an unprecedented decision: they would let their Israel Development Center design and build a mainstream microprocessor, the Pentium MMX, the first time Intel developed a flagship processor outside the United States. [1] This was a massive gamble. According to Intel's own technology journal, the development of MMX technology spanned five years and involved over 300 engineers across four Intel sites. At the center of this effort was Uri Weiser, director of the Architecture group at the IDC in Haifa. [1][2] Uri Weiser later recalled the struggle with characteristic understatement: "Some people were ready to quit." He was named an Intel Fellow for his work on MMX architecture, a rare honor that speaks to the significance of what the Israel team accomplished. [1] Meanwhile, in Haifa, 300 engineers were about to make a decision that would haunt x86 for the next three decades.

The Technical Reason for the Controversial Register Decision

Here is where things get spicy. The most consequential and controversial decision in MMX design was register aliasing. Intel aliased the 8 new MMX registers (MM0-MM7) directly onto the existing x87 floating-point register stack (ST(0)-ST(7)). [3]

Why they did this: to avoid adding new processor state. At the time, operating systems only knew how to save/restore the x87 FPU registers during context switches. Adding 8 entirely new registers would have required OS modifications across Windows, Linux, and every other x86 OS. This was the 1990s, remember; convincing Microsoft to change Windows was roughly as easy as convincing your cat to enjoy water sports.

The cost: you cannot mix floating-point and MMX instructions in the same routine without risking register corruption. Programmers must use the EMMS (Empty MMX State) instruction to switch between modes, and even then, there's overhead. [4] Think of it like sharing a closet with your neighbor: sure, it saves space, but good luck finding your socks when they've mysteriously migrated to the other person's side.

The register state mapping can be expressed as:

$$ \forall i \in \{0,\dots,7\}: \text{MM}_i \equiv \text{ST}(i) $$

where $\equiv$ denotes hardware-level aliasing (same physical storage).

Intel's engineers knew this was a compromise. But they made a calculated bet: most multimedia applications separate data generation (FP) from display (SIMD), so the restriction would rarely matter in practice. They were mostly right. Mostly…

The "MMX" Name
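To make the EMMS requirement described above concrete, here is a small C sketch of my own (not from the article) using the classic MMX intrinsics; _mm_empty() emits EMMS so that later x87 floating-point code sees a clean register stack. It should build with GCC or Clang on x86 (add -mmmx when targeting 32-bit).

    /* Illustrative only. Four packed 16-bit adds with MMX, then EMMS before FP use. */
    #include <mmintrin.h>   /* MMX intrinsics: __m64, _mm_add_pi16, _mm_empty */
    #include <stdio.h>
    #include <string.h>

    static void add4_i16(short dst[4], const short a[4], const short b[4]) {
        __m64 va, vb, vr;
        memcpy(&va, a, sizeof va);      /* load 4 x int16 into an MM register */
        memcpy(&vb, b, sizeof vb);
        vr = _mm_add_pi16(va, vb);      /* four 16-bit adds in one instruction */
        memcpy(dst, &vr, sizeof vr);
        _mm_empty();                    /* EMMS: release the aliased MM/ST(0)-ST(7) state */
    }

    int main(void) {
        short a[4] = {1, 2, 3, 4}, b[4] = {10, 20, 30, 40}, out[4];
        add4_i16(out, a, b);
        /* x87 floating point is only safe here because EMMS already ran above. */
        printf("%d %d %d %d  avg=%.1f\n", out[0], out[1], out[2], out[3],
               (out[0] + out[1] + out[2] + out[3]) / 4.0);
        return 0;
    }

Skipping the _mm_empty() call is exactly the mixing hazard the article describes: the next x87 instruction would see a register stack still tagged as MMX state.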

Source: Hacker News | Original Link

Meta Deployed AI and It Is Killing Our Agency

Meta Deployed AI and It Is Killing Our Agency – Mojo Dojo. By Ajay Chavda. Reading time: 3–4 minutes.

We manage millions of dollars in annual Meta ad spend. Not thousands. Millions. Our retail clients grow their businesses through Meta Ads, and for a lot of them, it's their single most important growth channel. We are, by any reasonable definition, a high-value customer. And yet, for the past several months, Meta has been treating us like we don't exist. Here's what's been happening, because it's genuinely one of the more absurd things we've experienced running an agency in two decades.

The loop goes like this. We hire a senior Paid Ads Specialist. They set up a dedicated work account, which, by the way, is standard professional practice. Keeping work and personal accounts separate is basic data hygiene, not a red flag. We upload their government ID for mandatory identity verification. Then, somewhere between five minutes and ten hours later, the account gets instantly banned.

We have done this with multiple specialists and social media managers now. Every single one banned. Before they've even opened an ad account or posted a single piece of content.

Now, we get it. Reading this, you might be thinking there's something suspicious going on. Maybe a pattern in the logins. Maybe something in the account setup that looks off. Maybe we're leaving something out. It's a fair reaction: Meta's security systems exist for a reason, and when someone says "we keep getting banned," the natural assumption is that they're doing something to deserve it.

But here's the thing. We have been advertising on Facebook since the platform first opened its doors to advertisers, back in 2008. We were there at the beginning. We have spent millions of dollars on it over that time. Our agency history, our billing history, our business identity – it's all there. There is no shady pattern. There is no hidden behaviour. There is just a broken automated system that cannot distinguish between a bot farm and a professional who created a work account on their first day.

And when we try to fix it? That's where it gets truly circular. Meta's standard response is to file an appeal through the Account Quality dashboard. Sounds reasonable, until you realise that the appeal tool is inside the platform – the same platform the specialist is completely locked out of. You cannot appeal a login ban from behind a login screen. We've tried everything. Every forum thread, every concierge support contact, every support line we could find. The answers we get back are remarkably consistent: "just create a new account" or "file an appeal." So we create a new account. That one gets banned too – often faster than the first. There is no clean slate. There is just the same broken automate

Source: Hacker News | Original Link

What Is OAuth?

What is OAuth? Wherein I [try to] answer a seemingly straightforward question: "WTF is OAuth, anyhow?" Blaine, February 21, 2026.

@geoffreylitt.com recently asked a question about OAuth on dead-Twitter: "I desperately need a Matt Levine style explanation of how OAuth works. What is the historical cascade of requirements that got us to this place?" There are plenty of explanations of the inner mechanical workings of OAuth, and lots of explanations about how various flows etc. work, but Geoffrey is asking a different question: "What I need is to understand why it is designed this way, and to see concrete examples of use cases that motivate the design."

In the 19 years (!) since I wrote the first sketch of an OAuth specification, there has been a lot of minutiae and cruft added, but the core idea remains the same. Thankfully, it's a very simple core. Geoffrey's a very smart guy, and the fact that he's asking this question made me think it's time to write down an answer to this.

It's maybe easiest to start with the Sign-In use-case, which is a much more complicated specification (OpenID Connect) than core OAuth. OIDC uses OAuth under the hood, but helps us get to the heart of what's actually happening. OIDC is functionally equivalent to "magic link" authentication. We send a secret to a place that only the person trying to identify themselves can access, and they prove that they can access that place by showing us the secret. That's it. The rest is just accumulated consensus, in part bikeshedding (agreeing on vocabulary, etc.), in part UX, and in part making sure that all the specific mechanisms are secure.

There's also an historical reason to start with OIDC to explain how all this works: in late 2006, I was working on Twitter, and we wanted to support OpenID (then 1.0) so that ahem Twitter wouldn't become a centralized holder of online identities. After chatting with the OpenID folks, we quickly realized that as it was constructed, we wouldn't be able to support both desktop clients and web sign-in, since our users wouldn't have passwords anymore! (Mobile apps didn't exist yet, but weren't far out.) So, in order to allow OpenID sign-in, we needed a way for folks using Twitter via alternative clients to sign in without a password. There were plenty of solutions for this: Flickr had an approach, AWS had one, delicious had one, lots of sites just let random other apps sign in to your account with your password, etc., but virtually every site in the "Web 2.0" cohort needed a way to do this. They were all insecure and all fully custom. Rather than building TwitterAuth, I figured it was time to have a standard.

Insert XKCD 927: Standards ("Fortunately, the charging one has been solved now that we've all standardized on mini-USB. Or is it micro-USB? Shit." https://xkcd.com/927/)

Thankfully, against all odds, we now have one standard for delegated auth. What it does is very simple: at its core, OAuth for delegation is a standard way to do the following:
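The excerpt ends here, so as a stand-in, here is a minimal sketch of the standard OAuth 2.0 authorization-code exchange (my illustration, not the post's; the endpoints, client identifiers, and scope are hypothetical):

    # 1. Send the user to the provider to approve access; the provider redirects
    #    back to redirect_uri with a one-time authorization code.
    open "https://auth.example.com/authorize?response_type=code&client_id=CLIENT_ID&redirect_uri=https://app.example.com/callback&scope=read_profile&state=RANDOM_STATE"

    # 2. The app exchanges that code (plus its own credential) for an access token,
    #    server-to-server, so the user's password never touches the app.
    curl -s https://auth.example.com/token \
      -d grant_type=authorization_code \
      -d code=CODE_FROM_STEP_1 \
      -d redirect_uri=https://app.example.com/callback \
      -d client_id=CLIENT_ID \
      -d client_secret=CLIENT_SECRET

    # 3. The app calls the API with the token instead of a password.
    curl -s https://api.example.com/me -H "Authorization: Bearer ACCESS_TOKEN"

The key property, which matches the "magic link" framing above, is that the user proves control of their account to the provider, and the app only ever receives a scoped, revocable token rather than the password itself.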

Source: Hacker News | Original Link

CERN rebuilt the original browser from 1989 (2019)

CERN 2019 WorldWideWeb Rebuild

Hello, World

In December 1990, an application called WorldWideWeb was developed on a NeXT machine at The European Organization for Nuclear Research (known as CERN) just outside of Geneva. This program – WorldWideWeb – is the antecedent of most of what we consider or know of as "the web" today. In February 2019, in celebration of the thirtieth anniversary of the development of WorldWideWeb, a group of developers and designers convened at CERN to rebuild the original browser within a contemporary browser, allowing users around the world to experience the rather humble origins of this transformative technology. This project was supported by the US Mission in Geneva through the CERN & Society Foundation.

Party like it's 1989

Ready to browse the World Wide Web using WorldWideWeb?

1. Launch the WorldWideWeb browser.
2. Select "Document" from the menu on the side.
3. Select "Open from full document reference".
4. Type a URL into the "reference" field.
5. Click "Open".

Click here to jump in (and remember you need to double-click on links): Launch WorldWideWeb

How To

- How to open a URL using the original NeXT browser
- How to edit a document and make a link using the original NeXT browser

Contents

- History — a brief history of the application which was built in 1989 as a progenitor to what we know as "the web" today.
- Timeline — a timeline of the thirty years of influences leading up to (and the thirty years of influence leading out from) the publication of the memo that led to the development of the first web browser.
- The Browser — instructions for using the recreated WorldWideWeb browser, and a collection of its interface patterns.
- Typography — details of the NeXT computer's fonts used by the WorldWideWeb browser.
- Inside the Code — a look at some of the original code of WorldWideWeb.
- Production Process — a behind-the-scenes look at how the WorldWideWeb browser was rebuilt for today.
- Related Links — links to additional historical and technical resources around the production of WorldWideWeb.
- Colophon — a bit of info about the folks behind the project.

Source: Hacker News | Original Link

Be Wary of Bluesky

Be Wary of Bluesky (atproto, open-protocols, decentralization)

In 2023, Bluesky's CTO Paul Frazee was asked what would happen if Bluesky ever turned against its users. His answer: "it would look something like this: bluesky has gone evil. there's a new alternative called freesky that people are rushing to. I'm switching to freesky." That's the same argument people made about Twitter. "If it goes bad, we'll just leave." We know how that played out.

The promise

Bluesky is built on ATProto, an open protocol. The pitch is simple: your data is yours, your identity is yours, and if you don't like what Bluesky is doing, you can take everything and leave. Apps like Tangled (git hosting), Grain (photos), and Leaflet (publishing) all plug into the same protocol. One account, many apps, no lock-in. It sounds great. But look closer.

Where your data actually lives

When you use any ATProto app, it writes data to your Personal Data Server, or PDS. Your Bluesky posts, your Tangled issues, your Leaflet publications, your Grain photos. All of it goes to the same place. For almost every user, that place is a server run by Bluesky. You can self-host a PDS. Almost nobody does. Why would they? Bluesky's PDS works out of the box with every app, zero setup, zero maintenance. Self-hosting means running a server, keeping it online, and gaining nothing in return. To be fair, migration tools exist. You can move your account to a self-hosted PDS for as little as $5 a month. Bluesky has made this easier over time and even supports moving back. But this only works if you do it before the door closes. If an acquirer disables exports, it doesn't matter that the tools existed yesterday. And we know from every platform transition in history that almost nobody takes proactive steps to protect their data.

The flywheel

Here's the part that worries me. Every new ATProto app makes this problem worse, not better. Each app tells you "sign in with your Bluesky account", which really means "write more data to Bluesky's servers." The more apps that launch, the more users depend on Bluesky's infrastructure, the less reason anyone has to leave. The protocol doesn't distribute value across the network. It concentrates it. Developers are building features on top of Bluesky's infrastructure for free, making it more indispensable with every app that ships. And Bluesky gets to claim the moral high ground the whole time. "We're open! We're decentralized! You can leave whenever you want!" Meanwhile, the switching cost goes up every day.

The chokepoints

It's not just the PDS. Bluesky controls almost every critical layer:

- The Relay. All data flows through it. Bluesky runs the dominant one. Whoever controls the relay controls what gets seen, hidden, or deprioritized. Third parties can run their own, but without the users, it doesn't matter.
- The AppView. This is what assembles your timeline, threads, and notifications. Bluesky runs the main one. If it goes down or goes hostile, every clien

Source: Hacker News | Original Link

Facebook is cooked

PILK #3 | Facebook is absolutely cooked

And I don't just mean that nobody uses it anymore. Like, I knew everyone under 50 had moved on, but I didn't realize the extent of the slop conveyor belt that's replaced us. I logged on for the first time in ~8 years to see if there was a group for my neighborhood (there wasn't). Out of curiosity I thought I'd scroll a bit down the main feed. The first post was the latest xkcd (a page I follow). The next ten posts were not by friends or pages I follow. They were basically all thirst traps of young women, mostly AI-generated, with generic captions. Here's a sampler — mildly NSFW, but I did leave out a couple of the lewder ones:

[Images behind a click-to-show toggle: mildly sensitive content (revealing clothing)]

Yikes. Again, I don't follow any of these pages. This is all just what Facebook is pushing on me. I know Twitter/X has worse problems with spam bots in the replies, but this is the News Feed! It's the main page of the site! It's the product that defined modern social media!

It wasn't all like that, though. There was also an AI video of a policeman confiscating a little boy's bike, only to bring him a brand new one. And there were some sloppy memes and jokes, mostly about relationships, like this (admittedly not AI) video sketch where a woman decides to intentionally start a fight with her boyfriend because she's on her period. Maybe that isn't literally about sex, but I'd classify it as the same sort of lizard-brain-rot engagement bait as those selfies. Meta even gives us some helpful ideas for sexist questions we can ask their AI about the video. Yep, that's another "yikes" from me. To be fair, though, sometimes that suggested questions feature is pretty useful! Like with this post, for example: "Why is she wearing pink heels? What is her personality?" Great questions, Meta.

I said these were "mostly" AI-generated. The truth is with how good the models are getting these days, it's hard to tell, and I think a couple of them might be real people. Still, some of these are pretty obviously AI. Here's one with a bunch of alien text and mangled logos on the scoreboard in the background. Hmm, I wonder if anyone has noticed this is AI? Let's check out the comments and see if anyone's pointed that ou— …never mind. (I dunno, maybe those are all bots too.)

So: is this just something wacky with my algorithm? I mean… maybe? That's part of the whole thing with these algorithmic feeds; it's hard to know if anyone else is seeing what I'm seeing. On the one hand, I doubt most (straight) women's feeds would look like this. But on the other hand, I hadn't logged in in nearly a decade! I hate to think what the feed looks like for some lonely old guy who's been scrolling the lightly-clothed AI gooniverse for hours every day. Did everyone but me know it was like this? I'd seen screencaps of stuff like the Jesus-statue-made-out-of-broccoli slop a year or two ago, but I thought that only happened to grandmas. I hadn

Source: Hacker News | Original Link

Hong Kong's CK Hutchison Issues Latest Statement

Hong Kong's CK Hutchison Issues Latest Statement | Tencent News. Global Times New Media (official account), Beijing, 2026-02-20 17:26.

According to an AFP report on February 19, Hong Kong-headquartered CK Hutchison Holdings said it had submitted a request to the Panamanian government on the 19th, seeking negotiations over the company's continued operation of the ports at both ends of the Panama Canal.

Last month, Panama's Supreme Court ruled that CK Hutchison's concession to operate the ports at both ends of the Panama Canal was unconstitutional. Panama subsequently announced that the Danish shipping group Maersk would take over operation of those ports. CK Hutchison said it would take legal action in response.

Alejandro Kouruklis (亚历杭德罗·库鲁克利斯), spokesman for the CK Hutchison subsidiary that operates the two ports, told the media: "We have requested a roundtable meeting between CK Hutchison and representatives of Panama's executive branch to seek a reasonable solution." Kouruklis said the company is willing to renegotiate all terms of the contract.

Since 1997, CK Hutchison has managed the Port of Cristóbal on the Atlantic side of the Panama Canal and the Port of Balboa on the Pacific side. In 2021, the concession was extended by 25 years.

Regarding the Panamanian Supreme Court's ruling that the government's contract renewal with Hutchison Ports' Panama Ports Company to operate the two Panamanian ports was unconstitutional, the Hong Kong Special Administrative Region government recently expressed strong dissatisfaction and firm opposition.

A spokesperson for China's Ministry of Foreign Affairs has said that the company concerned issued a statement at the earliest opportunity, stating that the ruling runs counter to the laws under which Panama approved the relevant concession and that the company reserves all rights, including recourse to legal proceedings. China will take all necessary measures to resolutely safeguard the legitimate rights and interests of Chinese enterprises.

Source | Cankao Xiaoxi (Reference News); Review | Li Jian; Editor | Xu Luming; Proofreading | Xiang Xinyue

Source: Tencent News | Original Link

Turn Dependabot Off

Turn Dependabot Off. 20 Feb 2026.

Dependabot is a noise machine. It makes you feel like you're doing work, but you're actually discouraging more useful work. This is especially true for security alerts in the Go ecosystem. I recommend turning it off and replacing it with a pair of scheduled GitHub Actions, one running govulncheck, and the other running your test suite against the latest version of your dependencies.

A little case study

On Tuesday, I published a security fix for filippo.io/edwards25519. The (*Point).MultiScalarMult method would produce invalid results if the receiver was not the identity point. A lot of the Go ecosystem depends on filippo.io/edwards25519, mostly through github.com/go-sql-driver/mysql (228k dependents only on GitHub). Essentially no one uses (*Point).MultiScalarMult.

Yesterday, Dependabot opened thousands of PRs against unaffected repositories to update filippo.io/edwards25519. These PRs were accompanied by a security alert with a nonsensical, made up CVSS v4 score and by a worrying 73% compatibility score, allegedly based on the breakage the update is causing in the ecosystem. Note that the diff between v1.1.0 and v1.1.1 is one line in the method no one uses. We even got one of these alerts for the Wycheproof repository, which does not import the affected filippo.io/edwards25519 package at all. Instead, it only imports the unaffected filippo.io/edwards25519/field package.

    $ go mod why -m filippo.io/edwards25519
    # filippo.io/edwards25519
    github.com/c2sp/wycheproof/tools/twistcheck
    filippo.io/edwards25519/field

We have turned Dependabot off.

Use a serious vulnerability scanner instead

But isn't this toil unavoidable, to prevent attackers from exploiting old vulnerabilities in your dependencies? Absolutely not! Computers are perfectly capable of doing the work of filtering out these irrelevant alerts for you. The Go Vulnerability Database has rich version, package, and symbol metadata for all Go vulnerabilities. Here's the entry for the filippo.io/edwards25519 vulnerability, also available in standard OSV format.

    modules:
      - module: filippo.io/edwards25519
        versions:
          - fixed: 1.1.1
        vulnerable_at: 1.1.0
        packages:
          - package: filippo.io/edwards25519
            symbols:
              - Point.MultiScalarMult
    summary: Invalid result or undefined behavior in filippo.io/edwards25519
    description: |-
        Previously, if MultiScalarMult was invoked on an initialized point who was
        not the identity point, MultiScalarMult produced an incorrect result. If
        called on an uninitialized point, MultiScalarMult exhibited undefined behavior.
    cves:
      - CVE-2026-26958
    credits:
      - shaharcohen1
      - WeebDataHoarder
    references:
      - advisory: https://github.com/FiloSottile/edwards25519/security/advisories/GHSA-fw7p-63qq-7hpr
      - fix: https://github.com/FiloSottile/edwards25519/commit/d1c650afb95fad0742b98d95f2eb2cf031393abb
    source:
      id: go-security-team
      created: 2026-02-17T14:45:04.271552-05:00
    review_status: REVIEWED

Any decent vulnerability scanner will at the
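A scheduled govulncheck Action of the kind the post recommends might look roughly like this minimal sketch (my own illustration, not the author's actual config; the workflow name, cron schedule, and action versions are assumptions):

    # .github/workflows/govulncheck.yml (illustrative)
    name: govulncheck
    on:
      schedule:
        - cron: "0 9 * * 1"   # weekly, Monday 09:00 UTC
      workflow_dispatch: {}
    jobs:
      scan:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - uses: actions/setup-go@v5
            with:
              go-version: stable
          - run: go install golang.org/x/vuln/cmd/govulncheck@latest
          # Symbol-level analysis: only fails if vulnerable code is actually reachable.
          - run: govulncheck ./...

The second workflow the post mentions, testing against the latest dependencies, could follow the same shape with a step along the lines of "go get -u ./... && go test ./..." so that breakage in new versions surfaces on a schedule instead of as per-dependency PR noise.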

Source: Hacker News | Original Link

I found a Vulnerability. They found a Lawyer

I found a Vulnerability. They found a Lawyer. | Blog | Yannick Dixken

I'm a diving instructor. I'm also a platform engineer who spends lots of his time thinking about and implementing infrastructure security. Sometimes those two worlds collide in unexpected ways.

[Photo: A Sula sula (Frigatebird) and a dive flag on the actual boat where I found the vulnerability – somewhere off Cocos Island.]

While on a 14-day-long dive trip around Cocos Island in Costa Rica, I stumbled across a vulnerability in the member portal of a major diving insurer – one that I'm personally insured through. What I found was so trivial, so fundamentally broken, that I genuinely couldn't believe it hadn't been exploited already. I disclosed this vulnerability on April 28, 2025 with a standard 30-day embargo period. That embargo expired on May 28, 2025 – over eight months ago. I waited this long to publish because I wanted to give the organization every reasonable opportunity to fully remediate the issue and notify affected users. The vulnerability has since been addressed, but to my knowledge, I have not received confirmation that affected users were notified. I have reached out to the organization to ask for clarification on this matter. This is the story of what happened when I tried to do the right thing.

The Vulnerability

To understand why this is so bad, you need to know how the registration process works. As a diving instructor, I register my students (to get them insured) through my account on the portal. I enter their personal information with their consent – name, date of birth, address, phone number, email – and the system creates an account for them. The student then receives an email with their new account credentials: a numeric user ID and a default password. They might log in to complete additional information, or they might never touch the portal again. When I registered three students in quick succession, they were sitting right next to me and checked their welcome emails. The user IDs were nearly identical – sequential numbers, one after the other. That's when it clicked that something really bad was going on.

Now here's the problem: the portal used incrementing numeric user IDs for login. User XXXXXX0, XXXXXX1, XXXXXX2, and so on. That alone is a red flag, but it gets worse: every account was provisioned with a static default password that was never enforced to be changed on first login. And many users – especially students who had their accounts created for them by their instructors – never changed it. So the "authentication" to access a user's full profile – name, address, phone number, email, date of birth – was: guess a number, then type the same default password that every account shares on account creation. There's a good chance you get in. That's it. No rate limiting. No account lockout. No MFA. Just an incrementing integer and a password that might as well have been password123. I

Source: Hacker News | Original Link