Advice, not control: the role of Remote Assistance in Waymo’s operations

Waymo – From the road

Beginning fully autonomous operations with the 6th-generation Waymo Driver: Waymo will begin fully autonomous operations with its 6th-generation Driver—an important step in bringing our technology to more riders in more cities. This latest system serves as the primary engine for our next era of expansion, with a streamlined configuration that drives down costs while maintaining our uncompromising safety standards. Designed for long-term growth across multiple vehicle platforms, this system’s expanded capabilities allow us to safely broaden our footprint into more diverse environments, including those with extreme winter weather, at an even greater scale. (Satish Jeyachandran, Vice President of Engineering, February 12, 2026)

The Waymo World Model: A New Frontier for Autonomous Driving Simulation: We are excited to introduce the Waymo World Model, a frontier generative model that sets a new bar for large-scale, hyper-realistic autonomous driving simulation. (Chiyu Max Jiang, Xander Masotto, Bo Sun, February 6, 2026)

Accelerating our global growth: Waymo raises $16 billion investment round: Today, Waymo enters a new era. We have raised a $16 billion investment round valuing the company at $126 billion post-money. This capital underscores that the age of autonomous mobility at scale has arrived, and Waymo is leading the way. In addition to Alphabet’s strong sustained support as our majority investor, the financing was led by Dragoneer Investment Group, DST Global, and Sequoia Capital, and included significant investments from Andreessen Horowitz and Mubadala Capital as well as Bessemer Venture Partners, Silver Lake, Tiger Global, and T. Rowe Price. Additional investors included BDT & MSD Partners, CapitalG, Fidelity Management & Research Company, GV, Kleiner Perkins, Perry Creek Capital, and Temasek, underscoring the Waymo Driver’s status as the world’s most trusted, proven, and scalable solution for the future of transportation. (Tekedra Mawakana and Dmitri Dolgov, Waymo Co-CEOs, February 2, 2026)

Advice, not control: the role of Remote Assistance in Waymo’s operations: Our mission at Waymo is to be the world’s most trusted driver, and we are committed to earning the public’s trust through transparency and proven road safety outcomes. As we expand around the world, we are investing in an operations team that matches our global scale, including U.S.-based operations personnel and global operations teams, to ensure seamless, safe, 24/7 operations worldwide. In 2024 we published a detailed outline describing how our Remote Assistance (RA) works. Though some have compared the function to aircraft dispatch, based on my decade as a U.S. Naval aviator flying F-14s and F/A-18s, I can firmly say that analogy is wrong. Aircraft dispatch is responsible for active flight monitoring, weather routing, and mechanical oversight for th

Source: Hacker News | Original Link

Thank HN: You helped save 33k lives

58 points by chaseadam17 1 hour ago | 7 comments

13 years ago, we launched Watsi.org with a Show HN [1]. For nearly a year, this community drove so much traffic that we couldn’t list patients fast enough. Then pg saw us on HN, wrote us our first big check, and accepted us as the first YC nonprofit (W13).

The next few years were a whirlwind. I was a young, naive founder with just enough experience to know I wanted Watsi to be more efficient, transparent, and innovative than most nonprofits. We spent 24/7 talking to users and coding. We did things that don’t scale. We tried our best to be walking, talking pg essays.

Over the years we learned that product/market fit is different for nonprofits. Not many people wake up and think, “I’d love to donate to a nonprofit today” with the same oomph that they think, “I’d love a coffee” or “I’d like to make more money.” No matter how much effort we put into fundraising, donations grew linearly, while requests for care grew exponentially. I felt caught in the middle. After investing everything I had, I eventually burned out and transitioned to the board.

I made a classic founder mistake and intertwined my self-worth with Watsi’s success. I believed that if I could somehow help every patient, I was a good person, but if I let down some patients, which became inevitable, I was a bad person. This was exacerbated by seeing our for-profit YC batchmates raise massive rounds. I felt like a failure for not scaling Watsi faster, but eventually we accepted reality and set Watsi on a slower, steadier, and more sustainable trajectory. Now that I have perspective, I’m incredibly proud of what the org has accomplished and grateful to everyone who has done a tour of duty to support us.

Watsi donors have donated over $20M to fund 33,241 surgeries, and we have a good shot at helping patients for a long time to come. In a world of fast growth and fast crashes, here’s a huge thank you to the HN users who have stuck by Watsi, or any other important cause, even when it’s not on the front page. I believe it embodies the best of humanity. Thanks HN!

[1] http://news.ycombinator.com/item?id=4424081

aftergibson, 15 minutes ago: You should be unbelievably proud of what you’ve achieved, and it’s lovely to be reminded of the amazing things people can accomplish against the backdrop of almost deafeningly negative sentiment going around. Thanks for doing what you do and for sharing your story!

chaseadam17 (OP), 10 minutes ago: Thank you 🙂 Watsi is lucky to have an incredible team and medical partners, who work in some of the most challenging environments to provide care to patients.

BloondAndDoom, 10 minutes ago: You did a great thing! Thank you. One thing I always thought of converging businesses with helping

Source: Hacker News | Original Link

BarraCUDA Open-source CUDA compiler targeting AMD GPUs

GitHub – Zaneham/BarraCUDA (65 stars, 4 forks, Apache-2.0 license): Open-source CUDA compiler targeting AMD GPUs (and more in the future!). Compiles .cu to GFX11 machine code.

BarraCUDA

An open-source CUDA compiler that targets AMD GPUs, with more architectures planned. Written in 15,000 lines of C99. Zero LLVM dependency. Compiles .cu files straight to GFX11 machine code and spits out ELF .hsaco binaries that AMD GPUs can actually run. This is what happens when you look at NVIDIA’s walled garden and think “how hard can it be?” The answer is: quite hard, actually, but I did it anyway.

What It Does

Takes CUDA C source code, the same .cu files you’d feed to nvcc, and compiles them to AMD RDNA 3 (gfx1100) binaries. No LLVM. No HIP translation layer. No “convert your CUDA to something else first.” Just a lexer, a parser, an IR, and roughly 1,700 lines of hand-written instruction selection that would make a compiler textbook weep.
BarraCUDA Pipeline:

  Source (.cu)
  ↓ Preprocessor: #include, #define, macros, conditionals
  ↓ Lexer: tokens
  ↓ Parser (recursive descent): AST
  ↓ Semantic analysis: type checking, scope resolution
  ↓ BIR (BarraCUDA IR): SSA form, typed instructions
  ↓ mem2reg: promotes allocas to SSA registers
  ↓ Instruction selection: AMDGPU machine instructions
  ↓ Register allocation: VGPR/SGPR assignment
  ↓ Binary encoding: GFX11 instruction words
  ↓ ELF emission: .hsaco ready for the GPU
  ↓ Your kernel runs on ya silicon

Every single encoding has been validated against llvm-objdump with zero decode failures. I didn’t use LLVM to compile, but I did use it to check my homework.

Building

# It’s C99. It builds with gcc. There are no dependencies.
make
# That’s it. No cmake. No autoconf. No 47-step build process.
# If this doesn’t work, your gcc is bro

Source: Hacker News | Original Link

Meta to retire messenger desktop app and messenger.com in April 2026

Meta to retire Messenger desktop app and Messenger.com in April 2026; users shift to web and mobile platforms
by Elijah Gaven Mitra, 17 February 2026 (Photo from Meta)

Meta, the parent company of Facebook and Messenger, has announced that its standalone Messenger desktop application and the Messenger.com website will no longer be available starting April 2026. The move marks the final phase in Meta’s gradual retirement of desktop-focused messaging interfaces.

Meta confirmed that users attempting to access messaging services via Messenger.com on desktop computers after the shutdown date will be automatically redirected to Facebook.com/messages to continue their conversations, or will need to use the Messenger mobile app on iOS and Android devices.

The Messenger desktop app for macOS and Windows had already been discontinued in December 2025, with Meta removing the apps from official stores and encouraging users to transition to web-based messaging well before April 2026. This policy change reflects a broader strategic shift by Meta toward browser-based and mobile messaging, rather than maintaining separate native desktop clients, which historically saw less usage than the mobile versions.

Meta has advised users to enable features like secure storage and PIN protection in their Messenger settings to ensure that encrypted chat history remains accessible across devices once the desktop and standalone web services are gone. This is especially relevant for users who relied on Messenger without a Facebook account, as they will still be able to access chats on mobile.

The retirement of the desktop app and separate web interface is part of Meta’s effort to simplify its communication ecosystem and focus on unified, browser-first platforms that are easier to update and integrate with new features. Industry analysts see this trend as part of a larger move away from traditional native clients toward centralized web and mobile experiences. Users and tech communities have shared mixed reactions online, with some lamenting the loss of the standalone desktop experience while others adjust to using Messenger through web browsers or mobile devices.

Source: Hacker News | Original Link

Show HN: AsteroidOS 2.0 – Nobody asked, we shipped anyway

AsteroidOS 2.0 Released – AsteroidOS News

AsteroidOS 2.0 Has Landed

Asteroids travel steadily, occasionally leaving observable distance. It has been a while since our last release, and now it’s finally here! AsteroidOS 2.0 has arrived, bringing major features and improvements gathered during its journey through community space. Always-on Display, expanded support for more watches, new launcher styles, customizable quick settings, significant performance increases in parts of the user interface, and enhancements to our synchronization clients are just some highlights of what to expect.

Milestones Reached

- Always-on Display
- Tilt-to-wake
- Palm-to-sleep
- Heart rate monitor app
- Initial step counting support
- Music volume control
- Compass support
- Support for Bluetooth HID and Audio

Design, Usability, and App Improvements

- New QuickPanel: the former QuickSettings top menu on the homescreen has been reworked into a highly customizable QuickPanel with many more settings toggles, app shortcuts, and a remorse-timer-driven power off.
- New app launchers: seven more app launcher styles have been added, selectable on the new Launcher settings page.
- Enhanced Wallpaper and Watchface gallery: watchfaces are now paired with the user-selected wallpaper right in the Watchface gallery, helping you find your favourite combination at a glance. Both pages received major performance improvements.
- Nightstand mode: use your watch as a bedside clock, or simply show charging much more clearly. Selected watchfaces show a large charging status when power is connected, and the nightstand settings page makes this mode very versatile.
- New background animation: reworked design for a more organic “breathing” feel.
- New wallpapers: extending the well-received flatmesh design, triangulated wallpapers turned out to fit beautifully.
- Diamonds: a 2048-like game with a fresh twist, nicely suited to small resolutions and displays.
- Weather app design overhaul: embracing the possibilities Noto Sans and its vast variety of font styles offer, the weather app was refined for better legibility and better presentation of very long place names.
- Timer app redesign: the timer app now works in the background, is optimised for use on round watches, and its design is now consistent with the stopwatch.
- Flashlight app: yup, it flashes light. Most useful, so it was added to the stock selection.
- Animated bootsplash logo: a very small touch, but yet another way for designers to get involved. Round screens with a flat-tyre shape are now supported.
- Calculator app with new layout: improved button layout for easier operation and better legibility, especially on round displays.
- New UI elements and polished icons: improved toggles, progress bars, and other UI elements by unifying the design and removing inconsistencies.
- More translations (49 languages): more than 20 languages added since our last release, thanks to much-welcome community effort.
- Noto Sans system font: supporting the local

Source: Hacker News | Original Link

Gentoo on Codeberg

Gentoo on Codeberg – Gentoo Linux
Feb 16, 2026

Gentoo now has a presence on Codeberg, and contributions can be submitted for the Gentoo repository mirror at https://codeberg.org/gentoo/gentoo as an alternative to GitHub. Eventually, other git repositories will also become available under the Codeberg Gentoo organization. This is part of the gradual mirror migration away from GitHub, as already mentioned in the 2025 end-of-year review. Codeberg is a site based on Forgejo, maintained by a dedicated non-profit organization, and located in Berlin, Germany. Thanks to everyone who has helped make this move possible! These mirrors are for the convenience of contributors; we continue to host our own repositories, just as we did while using GitHub mirrors for ease of contribution.

Submitting pull requests

If you wish to submit pull requests on Codeberg, it is recommended to use the AGit approach, as it is more space efficient and does not require you to maintain a fork of gentoo.git on your own Codeberg profile. To set it up, clone the upstream URL and check out a branch locally:

git clone [email protected]:repo/gentoo.git
cd gentoo
git remote add codeberg ssh://git@codeberg.org/gentoo/gentoo
git checkout -b my-new-fixes

Once you’re ready to create your PR:

git push codeberg HEAD:refs/for/master -o topic="$title"

and the PR should be created automatically. To push additional commits, repeat the above command, making sure the same topic is used. If you wish to force-push updates (because you’re amending commits), add "-o force-push=true" to the above command. More documentation can be found on our wiki.

Source: Hacker News | Original Link

Re:Source, a new V2EX client

Re:Source, a new V2EX client – V2EX › Share & Create

kenis · 13 hours 31 minutes ago · 1655 views

Introduction: Re:Source is a native Android client for multiple forums. It’s fast and fluid, follows the Material 3 design guidelines, and supports a multi-forum aggregation mode. Give it a try!

Supported forums / communities: 51NB / 专门网, Chiphell, HiPDA / 4D4Y, LINUX DO, V2EX

Download:
Google Play: https://play.google.com/store/apps/details?id=io.lv1.resource
GitHub: https://github.com/kenischu/ReSource/releases

23 replies • 2026-02-17 23:32:28 +08:00

1. kenis (OP), 13 hours 3 minutes ago via Android: In 2014 I posted a “V2EX iOS app concept” here (/t/120447). I had just released my first app under my own name, and riding that high I stayed up nights drawing concept mockups of a V2EX client in Sketch. But the deeper I dug, the clearer it became that a complete experience would require a lot of HTML parsing, far too much work, so I had to shelve it. In the blink of an eye, twelve years passed. Time moves faster than you can brace for, yet the idea that I’d build it “someday” never really left. Re:Source is that overdue promise kept: the old sketches and the old obsession, turned into something you can hold in your hand every day.
2. randomx, 12 hours 44 minutes ago: A chh and v2 old-timer here to show support. I’m an ancient lurker.
3. simonzhang0207, 11 hours 35 minutes ago: Gave it a try; not bad.
4. EricYuan1, 11 hours 18 minutes ago: 404
5. wegbjwjm, 11 hours 9 minutes ago via iPhone: Hurry up and make an iOS version.
6. ios, 11 hours 7 minutes ago: Seriously, iOS urgently needs you.
7. HzMz, 11 hours 4 minutes ago: The links won’t open anymore.
8. miaoxiaomayi, 10 hours 31 minutes ago: Can’t open them, friend.
9. fanxasy, 10 hours 14 minutes ago: Not actually open source.
10. simonzhang0207, 10 hours 6 minutes ago: Both links are dead, but it still shows up fine in a Google Play search.
11. xuromky, 9 hours 54 minutes ago: If you can’t find it, see the OP’s previous post.
12. wutiao, 8 hours 45 minutes ago: Works very well.
13. ios, 7 hours 39 minutes ago: https://play.google.com/store/apps/details?id=io.lv1.resource
14. ios, 7 hours 37 minutes ago: https://github.com/kenischu/ReSource
15. zachary99, 7 hours 29 minutes ago: Can it be used with PT (private tracker) sites?
16. win8en, 7 hours 15 minutes ago: Still quite a few bugs; keep it up. Tapping the emoji icon when replying makes the app quit.
17. Evergreen, 6 hours 52 minutes ago via Android: Could you support 2libra?
18. Updated, 4 hours 49 minutes ago: OP finally posted on v2.
19. Bio, 4 hours 36 minutes ago: The links have all gone dead.
20. 970749518nkq, 4 hours 2 minutes ago via Android: Login seems to get stuck at V2EX’s 2FA. Or is it my network?
21. kylinj, 3 hours 25 minutes ago: Supporting this. I’ve long wanted to build an aggregator app like this myself but never acted on it.
22. so898, 3 hours 25 minutes ago: Sigh, has abusing GitHub become a habit for modern software? Open-sourcing just a README and using GitHub as a forum; more and more apps are like this…
23. Lyet813, 3 hours 17 minutes ago via Android: “Open-sourced a README.” Nice one.

Source: V2EX | Original Link

Can a MagSafe external battery reduce battery charge cycles?

Can a MagSafe external battery reduce battery charge cycles? – V2EX › iPhone

mangmaimu · 17 hours 3 minutes ago via iPhone · 1423 views

Is it good for battery longevity? I hear there are also third-party power banks that can do what the external battery does.

23 replies • 2026-02-18 02:29:30 +08:00

1. steveshi, 16 hours 51 minutes ago: No, it can’t.
2. JoshTheLegend, 15 hours 49 minutes ago: That thing is just a power bank.
3. dilidilid, 15 hours 44 minutes ago: I don’t get why people care so much about battery longevity. With the last few generations, unless you spend all day playing gacha games, you’ll generally still have 85%+ after three years, and an official replacement battery from the JD-authorized store is only around 400 yuan.
4. cosmosz, 15 hours 43 minutes ago: Depends which kind. The official MagSafe Battery Pack (now discontinued) effectively could: it bypassed the phone’s battery and powered the phone directly, versus an ordinary external battery, which charges the battery, which then powers the phone.
5. sorakado, 15 hours 15 minutes ago: More than the charge cycles, it’s the heat from a MagSafe external battery that damages the battery.
6. zsqduke, 15 hours 5 minutes ago via iPhone: @cosmosz Bypassing the battery and powering the phone directly? That can’t be right, not even over a cable, and certainly not over wireless charging.
7. LotusChuan, 15 hours 1 minute ago: It can’t. I power my iPhone 15 with MagSafe at home and a Xiaomi magnetic power bank when out. In under two and a half years, capacity is down to 77% with 700+ cycles, a faster decline than charging over a cable.
8. gogo_tutu, 14 hours 53 minutes ago: @dilidilid I don’t get it either, but many people plan to sell the phone secondhand later and worry low battery health will hurt the price.
9. talus, 14 hours 52 minutes ago via iPhone: I suggest saving the MagSafe money for a battery replacement, even an official one.
10. CivAx, 13 hours 55 minutes ago: You strap a heating pad to your battery and expect less battery-health loss?
11. 7gugu, 13 hours 49 minutes ago: @zsqduke The earlier Smart Battery Case could do it: it supplied power through the logic board. The current MagSafe version is different; it uses the battery as a relay instead of powering the board directly, so it can’t.
12. dfdd1811, 13 hours 35 minutes ago: Buy the kind of magnetic power bank that charges over a cable, with the magnet just holding it in place. That might spare the battery a little.
13. jiaslbang, 13 hours 32 minutes ago: The essence of an external battery’s design is to keep the phone in the trickle-charge zone as much as possible, which in theory is good for longevity.
14. zsqduke, 13 hours 17 minutes ago via iPhone: @7gugu Trying to reason about it at the circuit level: there was an earlier thread discussing charging while discharging. At the cell level that doesn’t exist; a cell has one positive and one negative terminal and can only charge or discharge, and wired directly in parallel it comes down to which side has the higher voltage. Of course a phone isn’t that simple; there are management chips. Is there actually a difference between an external source connected at the battery versus at the board? If the system can’t charge, there’s no difference, and with the battery already full there’s no difference either. While the external battery is charging the internal one, you can’t stop it from also powering the board. Whether it can reduce the internal battery’s cycle count ultimately comes down to whether charging can be force-stopped.
15. zsqduke, 13 hours 16 minutes ago via iPhone: @zsqduke Correction: “if the system can’t charge” should read “if the system can’t stop charging.”
16. WuSiYu, 12 hours 33 minutes ago: More likely the extra heat shortens the battery’s life instead.
17. zhhmax, 12 hours 26 minutes ago: Rather than buying MagSafe, just use the phone hard and put the MagSafe money toward an official battery replacement.
18. ajyz, 11 hours 58 minutes ago: I have a power bank, but it mostly serves my Switch 2 and other handhelds. For the iPhone I mostly use MagSafe because it’s portable and convenient. I bought the official one. On my old iPhone 14 Pro it ran noticeably warm; now on a 17 Pro it doesn’t get hot at all, at least in winter (I don’t game, just daily use). MagSafe adds about 50% charge, so for things like visiting relatives these days, it easily lasts a full day of scrolling.
19. ios, 11 hours 39 minutes ago: My launch-day MagSafe Battery Pack is already worn out…
20. benjaminliangcom, 7 hours 2 minutes ago: @zsqduke #6 Sounds like the bypass charging that Android phones all have now.
21. zhoucan007, 6 hours 9 minutes ago via iPhone: Someone has already run the experiment: it can’t. Apple’s strategy is repeated 5% charge/discharge top-ups.
22. gigishy, 1 hour 15 minutes ago via iPhone: @zsqduke Some older models, like the iPhone 11, had an official battery case, and I seem to recall it drained its own battery first before touching the phone’s. It was a long time ago and my memory is hazy, but I have an 11 Pro Max and that official battery pack, so I’ll run an experiment when I have time.
23. IvanLi127, 20 minutes ago: @zsqduke Powering the system solely from an external source alone is possible. Most mainstream power-management chips support power-path management (I haven’t seen one that doesn’t): the chip can route system power from external input, from the battery, from external input while charging the battery, or even from external input with the battery discharging to cover any shortfall.

Source: V2EX | Original Link

Claude Sonnet 4.6

Introducing Claude Sonnet 4.6
Feb 17, 2026

Claude Sonnet 4.6 is our most capable Sonnet model yet. It’s a full upgrade of the model’s skills across coding, computer use, long-context reasoning, agent planning, knowledge work, and design. Sonnet 4.6 also features a 1M token context window in beta. For those on our Free and Pro plans, Claude Sonnet 4.6 is now the default model in claude.ai and Claude Cowork. Pricing remains the same as Sonnet 4.5, starting at $3/$15 per million tokens.

Sonnet 4.6 brings much-improved coding skills to more of our users. Improvements in consistency, instruction following, and more have made developers with early access prefer Sonnet 4.6 to its predecessor by a wide margin. They often even prefer it to our smartest model from November 2025, Claude Opus 4.5. Performance that would previously have required reaching for an Opus-class model, including on real-world, economically valuable office tasks, is now available with Sonnet 4.6. The model also shows a major improvement in computer-use skills compared to prior Sonnet models.

As with every new Claude model, we’ve run extensive safety evaluations of Sonnet 4.6, which overall showed it to be as safe as, or safer than, our other recent Claude models. Our safety researchers concluded that Sonnet 4.6 has “a broadly warm, honest, prosocial, and at times funny character, very strong safety behaviors, and no signs of major concerns around high-stakes forms of misalignment.”

Computer use

Almost every organization has software it can’t easily automate: specialized systems and tools built before modern interfaces like APIs existed. To have AI use such software, users would previously have had to build bespoke connectors. But a model that can use a computer the way a person does changes that equation. In October 2024, we were the first to introduce a general-purpose computer-using model.
At the time, we wrote that it was “still experimental—at times cumbersome and error-prone,” but we expected rapid improvement. OSWorld, the standard benchmark for AI computer use, shows how far our models have come. It presents hundreds of tasks across real software (Chrome, LibreOffice, VS Code, and more) running on a simulated computer. There are no special APIs or purpose-built connectors; the model sees the computer and interacts with it in much the same way a person would: clicking a (virtual) mouse and typing on a (virtual) keyboard.

Across sixteen months, our Sonnet models have made steady gains on OSWorld. The improvements can also be seen beyond benchmarks: early Sonnet 4.6 users are seeing human-level capability in tasks like navigating a complex spreadsheet or filling out a multi-step web form, before pulling it all together across multiple browser tabs. The model certainly still lags behind the most skilled humans at using computers. But the rate of progress is remarkable nonetheless. It means that computer use is much more useful for a range of work tasks—and that substant

Source: Hacker News | Original Link

Using go fix to modernize Go code

Using go fix to modernize Go code – The Go Programming Language
The Go Blog
Alan Donovan, 17 February 2026

The 1.26 release of Go this month includes a completely rewritten go fix subcommand. Go fix uses a suite of algorithms to identify opportunities to improve your code, often by taking advantage of more modern features of the language and library. In this post, we’ll first show you how to use go fix to modernize your Go codebase. Then, in the second section, we’ll dive into the infrastructure behind it and how it is evolving. Finally, we’ll present the theme of “self-service” analysis tools to help module maintainers and organizations encode their own guidelines and best practices.

Running go fix

The go fix command, like go build and go vet, accepts a set of patterns that denote packages. This command fixes all packages beneath the current directory:

$ go fix ./...

On success, it silently updates your source files. It discards any fix that touches generated files, since the appropriate fix in that case is to the logic of the generator itself. We recommend running go fix over your project each time you update your build to a newer Go toolchain release. Since the command may fix hundreds of files, start from a clean git state so that the change consists only of edits from go fix; your code reviewers will thank you.
To preview the changes the above command would have made, use the -diff flag:

$ go fix -diff ./...
--- dir/file.go (old)
+++ dir/file.go (new)
-	eq := strings.IndexByte(pair, '=')
-	result[pair[:eq]] = pair[1+eq:]
+	before, after, _ := strings.Cut(pair, "=")
+	result[before] = after
...

You can list the available fixers by running this command:

$ go tool fix help
...
Registered analyzers:

    any          replace interface{} with any
    buildtag     check //go:build and // +build directives
    fmtappendf   replace []byte(fmt.Sprintf) with fmt.Appendf
    forvar       remove redundant re-declaration of loop variables
    hostport     check format of addresses passed to net.Dial
    inline       apply fixes based on 'go:fix inline' comment directives
    mapsloop     replace explicit loops over maps with calls to maps package
    minmax       replace if/else statements with calls to min or max
    ...

Adding the name of a particular analyzer shows its complete documentation:

$ go tool fix help forvar
forvar: remove redundant re-declaration of loop variables

The forvar analyzer removes unnecessary shadowing of loop variables. Before Go 1.22, it was common to write `for _, x := range s { x := x ... }` to create a fresh variable for each iteration. Go 1.22 changed the semantics of `for` loops, making this pattern redundant. This analyzer removes the unnecessary `x := x` statement. This fix only applies to `range` loops.

By default, the go fix command runs all analyzers. When fixing a large project, it may reduce the burden of code review if you apply fixes from the most prolific analyzers as separate code changes. To enable only specific analyzers, use the flags matching their names. For e
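To make the forvar pattern concrete, here is a hypothetical example of the kind of code it targets. Before Go 1.22, the `x := x` line was needed so that each closure captured its own copy of the loop variable; since Go 1.22 each iteration gets a fresh variable anyway, so the line is redundant and the fixer deletes it.

```go
package main

import "fmt"

// collect returns one closure per element. The `x := x` re-declaration
// was the pre-Go 1.22 idiom for giving each closure its own copy; it is
// now redundant, and the forvar fixer removes exactly this line.
func collect(s []int) []func() int {
	var fs []func() int
	for _, x := range s {
		x := x // redundant since Go 1.22; `go fix` deletes this
		fs = append(fs, func() int { return x })
	}
	return fs
}

func main() {
	for _, f := range collect([]int{1, 2, 3}) {
		fmt.Println(f())
	}
}
```

With or without the redundant line, this program now prints each element once, which is why the fix is safe to apply mechanically.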

Source: Hacker News | Original Link

Chess engines do weird stuff

Chess Engines Do Weird Stuff
Things LLM people can learn from

Training method

Since AlphaZero, lc0-style chess engines have been trained with RL. Specifically, you have the engine (search + model) play itself a bunch of times, and train the model to predict the outcome of the game. It turns out this isn’t necessary. Good model vs. bad model is ~200 elo, but search is ~1200 elo, so even a bad model plus search is essentially an oracle relative to a good model without search, and you can distill from bad model + search → good model. So RL was necessary, in some sense, only one time. Once a good model with search was trained, every future engine (including their competitors!)[1] can distill from that, and doesn’t have to generate games (expensive). lc0 trained their premier model, BT4, with distillation, and it got worse when you put it in the RL loop.

What makes distillation from search so powerful? People often compare this to distilling from best-of-n in RL, which I think is limited: a chess engine that runs the model on 50 positions is roughly equivalent to a model 30x larger, whereas LLM best-of-50 is generously worth a model 2x larger. Perhaps this was why people wanted test-time search to work so badly when RLVR was right under their noses.

Training at runtime

A recent technique is applying the distillation trick at runtime. At runtime, you evaluate early positions with your NN, then search them and get a more accurate picture. If your network says the position is +0.15 pawns better than search says, subtract 0.15 pawns from future evaluations. Your network live-adapts to the position it’s in!

Training on winning

The fundamental training objective of distilling from search is almost, but not quite, what we actually care about: winning. It’s very correlated, but we don’t actually care how well the model estimates one position; we care about how well it performs after search, after looking at 100 positions.
To fix this, lc0 uses a weird technique called SPSA: you randomly perturb the weights in two directions, play a bunch of games, and go in the direction that wins more.[2] This works very well and can get +50 elo on small models.[3] Consider for a moment how insane it is that this works at all. You’re modifying the weights in purely random directions. You have no gradient whatsoever. And yet it works quite well! +50 elo is ~1.5x model size, or ~a year’s worth of development effort! The main issue is that it’s wildly expensive: to do a single step you must play thousands of games, with dozens of moves and hundreds of position inferences per move. Like LLMs, you train for a long time on a pseudo-objective that’s close to what you want, then a short time on a very expensive and limited objective that’s closer to what you want.

Tuning through C++

The underlying technique of SPSA can be applied to literally any number in your chess program. Modify the number, see if it wins more or loses more, move in the direction that wins more.
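The SPSA step described above fits in a few lines. This is a generic sketch, not lc0's actual tuner; `play_match` is a hypothetical callback that plays the two perturbed engines against each other and returns the plus side's win rate in [0, 1].

```python
import random

def spsa_step(weights, play_match, lr=0.01, c=0.1):
    """One SPSA step: perturb every weight in a random +/- direction,
    compare match results, and move toward whichever side won more.
    No gradient is ever computed; `play_match` is assumed to return
    the win rate of the plus-perturbed weights in [0, 1]."""
    # Same random sign pattern applied in both directions
    delta = [random.choice([-1.0, 1.0]) for _ in weights]
    plus  = [w + c * d for w, d in zip(weights, delta)]
    minus = [w - c * d for w, d in zip(weights, delta)]
    score = play_match(plus, minus)   # e.g. fraction of games plus won
    # Step along delta, scaled by how decisively plus beat minus
    return [w + lr * (score - 0.5) * d for w, d in zip(weights, delta)]
```

The expense the article mentions is visible here: every call to `play_match` costs thousands of games, and each step yields only a single noisy scalar.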

Source: Hacker News | Original Link

So You Want to Build a Tunnel

So You Want to Build a Tunnel… — Practical Engineering

[Note that this article is a transcript of the video embedded above.]

It seems like homemade tunnels are kind of having a moment. Just about everywhere I look, it feels like someone is carving new spaces from the ground and documenting the process online. Colin Furze might be the quintessential example, with his wild tunnel project connecting his shop and house to an underground garage. You can watch the entire process in a series of videos on his YouTube channel, and he even started a second channel to share more details of the build. But he’s far from the only one. TikTok creator Kala, lovingly nicknamed “Tunnel Girl,” has been sharing the almost entirely solo excavation of a tunnel system below her house, amassing more than a million followers in the process. Zach from the JerryRigEverything channel has an ongoing series about a massive underground bunker project. Not strictly a tunnel, but in the same spirit. In Wisconsin, Eric Sutterlin and a team of volunteers have built Sandland, which features a maze of sandstone tunnels in the hillside that can occasionally be seen on the Save It For Parts channel. My friend Brent bought the abandoned mining town of Cerro Gordo and regularly explores the shafts and drifts on his channel, Ghost Town Living. And there are lots more. Wikipedia has a whole page about “hobby tunneling,” which it defines as “tunnel construction as a pastime.”

There’s something captivating about subterranean construction, delving into the deep, carving habitable space from the earth. In one case in Toronto, a tunnel was discovered in a public park, sparking headlines worldwide and fueling wild conspiracy theories about terrorist plots. Turns out, it was just a guy who liked digging. When he was interviewed by Maclean’s, he said, “Honestly, I loved it so much. I don’t know why I loved it. It was just something so cool…” What more can you say than that?
Some of us just yearn for the mines. Plenty of people have front yards and back yards, but not everyone has an underyard. But the thing is: underground construction is pretty dangerous. And not only that; it also poses a lot of very unique engineering challenges that a hobbyist might not be prepared to solve.

So I thought it might be fun to do a little exploration into modern tunnel construction methods used in public infrastructure and how those lessons can be applied to endeavors of the more homemade variety. Don't take it as advice; I am a civil engineer, but I'm not your civil engineer. That said, maybe I can at least give you a sense of what's involved in a project like this, and some things you might want to study further before you get out the pickaxe and helmet light. I'm Grady, and this is Practical Engineering.

I think one of the reasons that tunneling is so awesome is that the underground seems like a kind of no man's land. It's a different kind of wilderness – unexplored territory in a world where everyth

Source: Hacker News | Original Link

Async/Await on the GPU

Async/await on the GPU – VectorWare

At VectorWare, we are building the first GPU-native software company. Today, we are excited to announce that we can successfully use Rust's Future trait and async/await on the GPU. This milestone marks a significant step towards our vision of enabling developers to write complex, high-performance applications that leverage the full power of GPU hardware using familiar Rust abstractions.

Concurrent programming on the GPU

GPU programming traditionally focuses on data parallelism. A developer writes a single operation and the GPU runs that operation in parallel across different parts of the data.

```
fn conceptual_gpu_kernel(data) {
    // All threads in all warps do the same thing to different parts of data
    data[thread_id] = data[thread_id] * 2;
}
```

This model works well for standalone and uniform tasks such as graphics rendering, matrix multiplication, and image processing. As GPU programs grow more sophisticated, developers use warp specialization to introduce more complex control flow and dynamic behavior. With warp specialization, different parts of the GPU run different parts of the program concurrently.

```
fn conceptual_gpu_kernel(data) {
    let communication = …;
    if warp == 0 {
        // Have warp 0 load data from main memory
        load(data, communication);
    } else if warp == 1 {
        // Have warp 1 compute A on loaded data and forward it to B
        compute_A(communication);
    } else {
        // Have warps 2 and 3 compute B on loaded data and store it
        compute_B(communication, data);
    }
}
```

Warp specialization shifts GPU logic from uniform data parallelism to explicit task-based parallelism. This enables more sophisticated programs that make better use of the hardware. For example, one warp can load data from memory while another performs computations to improve utilization of both compute and memory. This added expressiveness comes at a cost. Developers must manually manage concurrency and synchronization because there is no language or runtime support for doing so.
Similar to threading and synchronization on the CPU, this is error-prone and difficult to reason about.

Better concurrent programming on the GPU

There are many projects that aim to provide the benefits of warp specialization without the pain of manual concurrency and synchronization. JAX models GPU programs as computation graphs that encode dependencies between operations. The JAX compiler analyzes this graph to determine ordering, parallelism, and placement before generating the program that executes. This allows JAX to manage and optimize execution while presenting a high-level programming model in a Python-based DSL. The same model supports multiple hardware backends, including CPUs and TPUs, without changing user code.

Triton expresses computation in terms of blocks that execute independently on the GPU. Like JAX, Triton uses a Python-based DSL to define how these blocks should execute. The Triton compiler lowers block definitions through a multi-level pipeline of MLIR dialects,
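The post does not show VectorWare's GPU executor, so as a point of reference, here is a minimal CPU-side sketch in plain Rust (std only; the names `Load`, `pipeline`, and `block_on` are illustrative, not VectorWare's API). It shows the machinery that async/await builds on: a "load" future that is not ready on its first poll, a `pipeline` that awaits it before computing, and a tiny busy-polling executor. The same Future/poll contract is what makes load-then-compute dependencies expressible without manual synchronization.

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A future that mimics a slow memory load: it reports Pending on the
// first poll and Ready on the second, like a warp waiting on data.
struct Load {
    polled: bool,
}

impl Future for Load {
    type Output = u32;
    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<u32> {
        if self.polled {
            Poll::Ready(21)
        } else {
            self.polled = true;
            cx.waker().wake_by_ref(); // request another poll
            Poll::Pending
        }
    }
}

// "Load, then compute": the dependency is expressed with .await,
// not with manual flags or barriers.
async fn pipeline() -> u32 {
    let x = Load { polled: false }.await;
    x * 2
}

// A no-op Waker: our executor polls in a loop, so wakeups do nothing.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

// Minimal executor: drive the future to completion by polling it.
fn block_on<F: Future>(mut fut: F) -> F::Output {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    // Safety: `fut` lives on this stack frame and is never moved again.
    let mut fut = unsafe { Pin::new_unchecked(&mut fut) };
    loop {
        if let Poll::Ready(v) = fut.as_mut().poll(&mut cx) {
            return v;
        }
    }
}

fn main() {
    assert_eq!(block_on(pipeline()), 42);
    println!("pipeline result: {}", block_on(pipeline()));
}
```

On a GPU, the executor and waker would be very different (the post's whole point), but the state machine that the compiler generates from `pipeline` is the same shape either way.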

Source: Hacker News | Original Link

Green, smart, and health-oriented consumption in strong demand during the first two days of the Spring Festival holiday

2026-02-17 16:57 — Commerce big data shows that in the first two days of the holiday, the daily average sales of key retail and catering enterprises nationwide grew 10.6% over the first two days of the 2025 Spring Festival. On February 15, foot traffic and revenue at the 78 pedestrian streets (commercial districts) monitored by the Ministry of Commerce rose 23.2% and 33.2% respectively compared with the first day of last year's holiday. Trade-in programs are fully unlocking consumer demand: as of February 16, the 2026 consumer-goods trade-in program had benefited 27.556 million people and driven 193.09 billion yuan in sales, including 607,000 vehicle trade-ins that generated 99.56 billion yuan in new-car sales. Demand for green, smart, and health products is strong: commerce big data shows that on February 15, sales of smart wearable devices on key platforms grew 130%, smart blood-pressure and blood-glucose monitors grew more than 60%, and organic food grew 52%. (CCTV News)

Source: 36Kr | Original Link