Dinosaur Food: 100M year old foods we still eat today (2022)

Boris Cherny's Blog — Dinosaur Food: 100 million year old foods we still eat today
January 17, 2022

I just finished Oliver Sacks' excellent Everything in Its Place. In it, he mentioned as an aside that the Ginkgo biloba tree is hundreds of millions of years old, and its phenotype has been practically frozen since then – a living fossil. Of course, this is the same tree that grows ぎんなん (Ginkgo nuts), an East Asian delicacy found in many dishes, 茶碗蒸し (Chawanmushi) for example. Ginkgo has been around so long, it predates the dinosaurs! And we still eat it! How cool is that.

This got me thinking – what are the oldest foods we consume today? Criteria:

- Must be edible by humans
- Must be morphologically unchanged since its fossil age

| Kingdom | Species | Common name | Age (years) |
| --- | --- | --- | --- |
| Animalia | Tachypleus tridentatus | Horseshoe crab | 480M |
| Plantae | Ginkgo biloba | Maidenhair nuts | 290M |
| Plantae | Bryoria fremontii | Wila | 250M? |
| Plantae | Cladonia rangiferina | Reindeer lichen | 250M? |
| Plantae | Cycas revoluta | Sago palm | 200M |
| Plantae | Araucaria araucana | Monkey puzzle tree nuts | 160M |
| Plantae | Equisetum arvense | Horsetail | 140M |
| Plantae | Welwitschia | – | 112M |
| Plantae | Osmundastrum cinnamomeum | Cinnamon fern | 70M |
| Plantae | Trapa natans | Water caltrop nuts | 66M |
| Plantae | Nelumbo lutea, Nelumbo nucifera | Lotus | 65M+ |

Note: I'm a hobbyist, and not a paleobotanist. Additions and edits are welcome, if I misclassified or missed anything.

Source: Hacker News | Original Link

Small desk in the study: how to share a Studio Display between two machines (Mac Mini + Win 4090)?

Posted to V2EX › Questions by everclooney007 · 7 hours 34 minutes ago · 861 views

Background: My study desk is small and only fits one monitor. I currently use an Apple Studio Display, mainly with a Mac Mini. Daily work on this combo is very efficient and I'm very used to it. I also have a Windows machine with an RTX 4090 (an ASUS ProArt card with a Type-C port), mainly for gaming and GPU compute. The pain point: I want both machines to share the one Studio Display, and I can't find a good way to switch.

What I've already tried or looked into:

1. Streaming (Moonlight / Parsec, etc.): tried it, not great. Noticeable latency and image-quality loss, especially in games. Shelved for now.
2. KVM switch: after searching around, there seems to be only one KVM that supports the Studio Display's Thunderbolt input. It costs over 2,000 RMB and I'm not sure how well it actually works, so I'm hesitant.
3. Swapping cables by hand: works in theory, but the Thunderbolt port on the back of the Studio Display is hard to reach, replugging every time is a hassle, and I worry frequent replugging will wear out the port. Not a long-term solution.

My requirements:

- 80% of the time it's Mac Mini + Studio Display for work; I don't want to change that setup.
- Occasionally switch to Windows for gaming or 4090 compute jobs.
- Switching should be as simple as possible, ideally without crawling behind the monitor to pull cables.
- Budget is flexible; I'm willing to spend money, but it has to be worth it.

Questions for everyone:

- Any solid Thunderbolt KVM recommendations? Experiences from actual users?
- Any approach I haven't thought of, such as adapters or docks?
- How do other dual-machine users handle this?
- Or, for this scenario, is it simply more sensible to buy a monitor with multiple inputs? (But I really do love the Studio Display's picture...)

Thanks in advance 🙏

30 replies • 2026-02-19 23:42:58 +08:00

1. zachary99 (via Android): Logitech keyboard and mouse; Flow supports working across devices.
2. kkk9: Post a photo if you can. It may not be that things don't fit, just that you haven't found the right idea.
3. everclooney007 (OP): @zachary99 The problem is the Studio Display has only one display input that needs to serve two machines; keyboard and mouse are the easy part.
4. everclooney007 (OP): @kkk9 The room really is tiny, about 7-8 square meters, with a bed in it too. A second monitor might physically fit, but the room would feel way too crowded... Prioritizing a single-monitor, two-machine solution. Thanks!
5. kome: If you can solve the (multiple sets of) keyboard and mouse problem, consider a capture card.
6. wangritian: Why not move the PC to the living room and game on the TV with a wireless keyboard and mouse? The Studio Display seems to be only 60 Hz anyway, which wastes the 4090; when you need compute, just connect remotely.
7. tyzrj766: I use a KVM switch, but my monitor is an ordinary 4K display over HDMI. The Studio Display is USB-C only, so I haven't tried that.
8. ONEO666: @everclooney007 #3 I've used a hacky trick before: get a Type-C male-to-female extension for the Studio Display's cable, so you can swap cables right on the desktop. Very cheap.
9. ONEO666: @everclooney007 #3 Something like that; give it a try.
10. everclooney007 (OP): @wangritian Ha, of course I'd love to. But the kid is too small and the living-room TV never gets turned on... I can only hide in the study.
11. everclooney007 (OP): @tyzrj766 Right, with an ordinary monitor that would be the best option. I'm considering replacing the Studio Display if I can't find a suitable KVM.
12. alikesi (via Android): Get a keyboard and mouse with one-touch device switching, and run extension cables from both machines' display outputs up onto the desk. Use streaming when you're not gaming; when you want to game, the display cable is right there on the desk to swap. Wired peripherals can be handled the same way.
13. everclooney007 (OP): @ONEO666 That's an idea!
14. pheyx: The Studio Display's input supports DP over Type-C, so use an ordinary DP switch: 1) on the Mac Mini side, a USB-C-to-DP cable into the DP KVM; 2) the 4090 uses a regular DP cable; 3) the DP KVM's output goes to the display through a DP-to-USB-C adapter cable. These C-to-DP cables are bidirectional; DP 1.4 spec is enough.
15. allinschroe: I use a monitor that supports the DDC/CI protocol and switch inputs from the command line:

```
# Mac
# Install m1ddc
brew install m1ddc
# Show the current input source
m1ddc display 1 get input
# Switch the input source
m1ddc set input 17

# Windows
# Install ControlMyMonitor: https://www.nirsoft.net/utils/control_my_monitor.html
# Show the current input source code
"C:\ControlMyMonitor\ControlMyMonitor.exe" /GetValue "Primary" 60
# Switch the input source
"C:\ControlMyMonitor\ControlMyMonitor.exe" /SetValue "Primary" 60 18
```

16. kkk9: Found it, the KVM switch you need. It supports switching up to 3 devices on one display (all Thunderbolt/USB-C ports), plus one set of audio + keyboard/mouse: ATEN CS1953 3-Port USB-C DisplayPort Hybrid KVMP™ switch. Official Tmall: https://detail.tmall.com/item.htm?id=637038960295 Official JD: https://item.jd.com/100019286374.html Official site: https://www.aten.com.cn/cn/zh/products/kvm%E5%A4%9A%E7%94%B5%E8%84%91%E5%88%87%E6%8D%A2%E5%99%

Source: V2EX | Original Link

Are you still using GPT, or have you moved to other LLMs?

Posted to V2EX › OpenAI by 383394544 · 13 hours 34 minutes ago via iPhone · 2436 views

These days I no longer ask GPT anything involving value judgments or social science, because it keeps fence-sitting, waffling, and playing the good guy, trying to lecture me on values. When you call out a mistake, it argues back first, then keeps playing the saint and churning out morally-correct platitudes. Grok feels deliberately cutesy, always joking around, with very low information density. Gemini is a bit more balanced.

Addendum 1 (11 hours 18 minutes ago): "Post a hot take and get flamed" is exactly the problem. Is a hot take automatically wrong? This post of mine, saying "GPT shouldn't act the saint," is a hot take too, but can you say my request is unreasonable?

Addendum 2 (11 hours 13 minutes ago): Saying I want AI to stop being sycophantic != wanting AI to insult people. An LLM should just address the facts, or at least give users a dial to set the tone themselves, not play it safe because it's afraid of getting flamed.

33 replies • 2026-02-19 22:24:08 +08:00

1. rocmax (via Android): These companies went to great lengths to make the AI answer like a saint, because otherwise one hot take gets them flamed. Honestly, value questions have no right or wrong; asking an AI is just looking for validation.
2. huaweii (via Android): Make good use of custom instructions.
3. canyue7897 (via iPhone): If it contradicted you all the time, wouldn't you want to smack the AI? Everyone likes hearing nice things. If it opened with "you idiot, you know nothing; seeing fools like you shows how great we AIs are," wouldn't you lose it too? Even when the AI's answer isn't great, it can at least provide emotional value.
4. lemoncoconut: GPT-5's preachiness and AI flavor practically leak through the screen. Apart from the Codex model I don't use any OpenAI products anymore. Gemini/Claude/Grok are all fine.
5. unbridle: Codex is very good. I mostly use Gemini day to day; default GPT feels a bit dumb, though agent mode is okay.
6. likooo125802023: Why do you all keep discussing worldviews with an AI? Is there really that much to talk about? I use AI for any STEM question; every vendor is about the same, and far better than search engines ever were.
7. JJC2900: @rocmax #1 Agreed.
8. SmithJohn: Do you like the hardcore solutions Gemini gives you?
9. 383394544 (OP, via iPhone): @rocmax That's how I like to use it, so what? As a paying customer, I have no obligation to put up with GPT's condescension.
10. 383394544 (OP, via iPhone): @canyue7897 The point is to give users a choice between cold answers and warm ones, not to force everything down to the market's lowest common denominator. Trying to please everyone only ends up offending everyone.
11. 383394544 (OP, via iPhone): @SmithJohn I like them a lot.
12. V2April: Curious: why ask an AI about "value judgments and social science" at all? What's the concrete use case?
13. rocmax (via Android): @383394544 So you really are the validation-seeking type.
14. bybyte: Gemini is the real bootlicker; it keeps agreeing with you and flattering you.
15. Lemonadeccc: Codex as the main driver; Claude reviews, then Codex fixes. Don't argue about who's right, just have a different model review the code.
16. iamnotcodinggod: GPT on the web for everyday questions. Claude Code as the main driver. Gemini CLI as a fully automatic agent, kind of like openclaw.
17. 383394544 (OP, via iPhone): @V2April Life questions; observations on politics and the economy; gripes I can't conveniently share with people but have to get off my chest; gender topics. I can't post every little thing on V2EX, can I?
18. KagurazakaNyaa: @383394544 For subjective questions that don't need accuracy, why not use a local abliterated model? https://github.com/Sumandora/remove-refusals-with-transformers
19. Mar5 (via iPhone): I generally don't talk life and ideals with AI; I like chatting with Grok about pretty women.
20. ivvei (via Android): Why ask AI these questions at all...
21. laminux29: You don't say what you asked GPT, you don't say how GPT answered, you just declare GPT is wrong. I actually sympathize with GPT; it doesn't get to pick mentally sound customers.
22. brucedone: For me it's mostly coding: kimi, minimax, and glm mixed; the company provides Claude, GPT, and a bunch of other models. Use whatever fits the task.
23. YsHaNg: Diagnosed: a "life"-node v2er.
24. Lemonadeccc: GPT Team is cheap, so I use GPT, with Claude for code review and catching missed test coverage. Bug fixes still go to GPT.
25. BeautifulSoap: Sure, and then when the AI "addresses the facts" and tears your country, ethnicity, faith, and revered figures to shreds, you'll just be left in impotent rage. The AI doesn't even need to insult users on purpose; it just has to surface the material already in its training data. Some people wouldn't care, but if certain religions got insulted, someone might literally bomb the AI company's headquarters. Easy for you to talk; AI companies value their lives. Same logic for social groups and key politicians: the AI could say anything, but the company's investors, employees, and executives would rather not die.
26. surfwave: Don't treat the AI as a person; treat it as a tool that does work.
27. Stargaze: I find GPT answers quite well once customized; Grok is decent too.
28. Admstor: Someone who can't even write a coherent title is judging the AI for acting the saint? You say you want the AI to be harsh, yet you're already getting worked up at the commenters who disagree with you. 叶公好龙 (professing to love what you actually fear).
29. maolon: Doesn't GPT give you a personality option? You didn't tune it, and you blame it for waffling?
30. dji38838c: Ha, still "as a paying customer"? They aren't really serving you; you went to all that trouble, exploited a loophole, and

Source: V2EX | Original Link

Gemini suddenly deleted all of my conversation history

Posted to V2EX › Google by noinil · 14 hours 3 minutes ago · 3141 views

This is truly baffling! How can a product from a company this big screw up something this basic? Dozens of my conversations vanished overnight! There are plenty of reports of the same experience online; everyone is angry but helpless. Google has offered no reasonable explanation or fix to date. It reminds me of Google Reader, Google+, and so many other products. What a dreadful company.

44 replies • 2026-02-19 23:16:10 +08:00

1. canyue7897 (via iPhone): Did your account get signed out? This usually doesn't happen.
2. Lentin: Took a look; my history is gone too. Feels like a loading issue; I'll check again later...
3. win8en (via Android): Don't panic, take it slow. It may just be a network problem.
4. datou: Just checked; all my conversation history is still there.
5. ha0zi: I've seen this when the network is bad; let it ride for a while.
6. patchao2000: Mine is gone too.
7. dingguagua: Plus subscriber here; gone too.
8. BaiMax (via Android): Mine is gone too.
9. jiaoyidongxi: Usually when Gemini hallucinates and you point out its mistake, Google quietly scrubs all the related conversations, your question included.
10. Lyet813 (via Android): Just checked; mine is gone too 😅
11. Tathagatagarbha: Did you perhaps set activity records to auto-delete here? https://myactivity.google.com/u/0/product/gemini
12. tf141: Gone for me as well. Others seem to have run into this before; not sure whether it's a bug.
13. churchmice (via Android): Mine is gone too.
14. mogita (via iPhone): Seen this before; refreshing a few times or re-logging in brings it back. But with this many people affected, there may be a real incident this time.
15. ks3825 (via Android): Google's My Activity has never worked well since the switch; even Chrome history can get lost. Nothing surprises me anymore. https://www.v2ex.com/t/1029071
16. hzy888: Mine is gone too; re-logging in and switching proxies to dig for it.
17. bubenkaryn: Gone. Two months of hard work, no exported backup, just gone.
18. wangbin526 (via Android): All of mine gone too, on the 99-a-year paid Plus, while the other account in the same family group is fully intact.
19. yzh2836: Same here, gone. I worried it was something on my end; seeing similar cases is oddly reassuring. Google will probably restore from a backup. The activity records are all still there, but the sidebar shows no conversations at all.
20. DefoliationM: Mine are all there.
21. dunn (via Android): Took a look; mine is still there.
22. Folder: Checked, mine was still there. Just as I was about to breathe easy, I refreshed and it all disappeared.
23. laminux29: My conversation history is still there, but your line "how can a product from a company this big screw up something this basic" is plainly naive. Is this kind of thing rare? Space Force even shot a whole segment mocking Windows for auto-updating in the middle of work.
24. showonder: What's the mechanism here? I've hit this more than once; Gemini feels particularly unreliable.
25. sieo2021: Well, that's that. Mine is gone too.
26. stormscloudy: It's still in Gemini Apps Activity; pull the important data out first.
27. LasonyaBlay (via iPhone): Plus here too, mine is also gone. Luckily nothing terribly important.
28. surfwave: Possibly a network issue; refresh a few times and it should come back, happens to me all the time. Also check your browser's ad-blocking extensions, which can interfere; turn them off and refresh.
29. Admstor: A v2er built an extension that automatically saves conversation history from all the AI sites: https://chatmemo.ai/ It stores text only, and claims to keep everything local.
30. Aestas16: Mine is gone too; no idea whether it will be restored.
31. Loocor: If sessions are being auto-deleted, it may be related to the Gemini CLI's session-retention setting; see https://geminicli.com/docs/cli/session-management/#session-retention and check your configuration.
32. street000: Same, gone. The content library still has things in it, so it should come back.
33. superkkk: My dozens of conversations are all gone; scared the hell out of me.
34. cellsyx (via Android): Pro subscription; gone.
35. defaw: Mine is still there.
36. lingguo (via Android): Damn, that scared me into exporting all my chat history to local storage.
37. Taxpayer: Seems to be caused by the Gemini app update; you can still search and find conversations in the app though.
38. LongLights: Mine is gone too, even though the activity records still exist.
39. immao: Mine often disappears, but reappears after I switch proxies.
40. IanHo: Same here.
41. fork: Refreshed several times; nothing lost, all good.
42. eventlooped: Gone here too; noticed it this morning.
43. xceszzy: Holy crap, mine is gone too! It used to be the occasional conversation missing, but searchable. Today, out of all those records, only 2 are left??

Source: V2EX | Original Link

Amazon becomes the world's highest-revenue company, ending Walmart's 13-year reign atop the Fortune 500

IT之家 (IT Home), 2026-02-19 21:05, published from Hubei

IT之家, February 19 — Amazon today officially surpassed Walmart to become the world's highest-revenue company. Since its founding in Jeff Bezos's Seattle garage in 1994, Amazon has grown from an online bookstore into an e-commerce and cloud-computing giant.

Walmart had held the top revenue spot for more than a decade. For the 12 months ended January 31, Walmart's sales were $713.2 billion (IT之家 note: roughly 4.93 trillion RMB at current exchange rates). Amazon reported fiscal 2025 sales of $717.0 billion (roughly 4.96 trillion RMB), overtaking Walmart for the first time.

According to Bloomberg, Bezos studied Sam Walton's business philosophy early on and worked many of its strategies into Amazon's playbook. Over the past decade, driven by the shift of consumer spending from physical stores to online shopping and the rapid expansion of Amazon Web Services, Amazon's revenue grew nearly 10 times as fast as Walmart's.

The two companies compete head-to-head in retail. Amazon is the world's largest online retailer, with roughly 2.7 billion visits per month; Walmart, with more than 10,000 physical stores worldwide, is the largest brick-and-mortar retailer. Both still earn most of their revenue in the US market.

The key to Amazon's rise is not retail alone but its cloud business. Excluding AWS, Amazon's 2025 revenue would have been only $588 billion (roughly 4.06 trillion RMB). With data centers becoming critical infrastructure in the AI era, cloud computing proved the decisive factor.

Being first in revenue reflects scale and market reach more than capital-market endorsement. ExxonMobil and General Motors have held the title in the past. The most valuable company in the world today is Nvidia, at a market capitalization of about $4.5 trillion, far above Amazon or Walmart.

Bezos overtook Bill Gates as the world's richest person in 2017; he currently ranks fourth, with a fortune of about $228 billion, mostly from his Amazon stake.

Fortune's website wrote that Walmart has ranked first on the Fortune 500 for the past 13 years, and for 21 of the past 24 years. Barring surprises, Amazon will top the next Fortune 500 list, due out in early June.

Source: Tencent News | Original Link

Brother Andrew arrested by police; the British King responds

Huanqiu.com (环球网), 2026-02-19 20:45, published from Beijing

[Huanqiu.com report] According to a BBC report on the 19th, King Charles III of the United Kingdom issued a statement on the arrest of Andrew by police.

(File photo: King Charles III. Source: 视觉中国/VCG)

The statement said: "It is with great concern that I have learned the news concerning Andrew Mountbatten Windsor and the allegations of misconduct during his time in public office." The process ahead, it said, will be thorough, fair, and proper, and the relevant authorities will investigate the matter in an appropriate manner.

The statement went on: "As I have said before, they will have our full support and cooperation throughout this process. Let me be clear: the legal process must proceed according to law. While the investigation continues, it would not be appropriate for me to comment further on the matter."

"In the meantime, my family and I will continue to carry out our duties in service of you all," the statement concluded.

Source: Tencent News | Original Link

Gemini 3.1 Pro

Gemini 3.1 Pro – Model Card — Google DeepMind
Published: 19 February 2026 (PDF version available)

Model Cards are intended to provide essential information on Gemini models, including known limitations, mitigation approaches, and safety performance. Model cards may be updated from time to time; for example, to include updated evaluations as the model is improved or revised.

Contents: Model Information · Model Data · Implementation and Sustainability · Distribution · Evaluation · Intended Usage and Limitations · Ethics and Content Safety · Frontier Safety

Model Information

Description: Gemini 3.1 Pro is the next iteration in the Gemini 3 series of models, a suite of highly capable, natively multimodal reasoning models. As of this model card's date of publication, Gemini 3.1 Pro is Google's most advanced model for complex tasks. Gemini 3.1 Pro can comprehend vast datasets and challenging problems from massively multimodal information sources, including text, audio, images, video, and entire code repositories.

Model dependencies: Gemini 3.1 Pro is based on Gemini 3 Pro.

Inputs: Text strings (e.g., a question, a prompt, document(s) to be summarized), images, audio, and video files, with a token context window of up to 1M.

Outputs: Text, with a 64K token output limit.

Architecture: Gemini 3.1 Pro is based on Gemini 3 Pro. For more information about the model architecture, see the Gemini 3 Pro model card.

Model Data

Training Dataset: Gemini 3.1 Pro is based on Gemini 3 Pro. For more information about the training dataset, see the Gemini 3 Pro model card.

Training Data Processing: For more information about the training data processing, see the Gemini 3 Pro model card.

Implementation and Sustainability

Hardware: Gemini 3.1 Pro is based on Gemini 3 Pro. For more information about the hardware and Google's continued commitment to operating sustainably, see the Gemini 3 Pro model card.

Software: Gemini 3.1 Pro is based on Gemini 3 Pro. For more information about the software, see the Gemini 3 Pro model card.

Distribution

Gemini 3.1 Pro is distributed through the following channels, with respective documentation shared in line: Gemini App, Google Cloud / Vertex AI, Google AI Studio, Gemini API, Google Antigravity, NotebookLM. Our models are available to downstream providers via an application programming interface (API) and subject to relevant terms of use. No particular hardware or software is required to use the model. For AI Studio and the Gemini API, see the Gemini API Additional Terms of Service; for Vertex AI, see the Google Cloud Platform Terms of Service. For more information, see the Gemini Model API instructions and the Gemini API in Vertex AI quickstart.

Evaluation

Approach: Gemini 3.1 Pro was evaluated across a range of benchmarks, including reasoning, multimodal capabilities, agentic tool use, multi-lingual performance, and long-context. Additional benchmarks and details on a

Source: Hacker News | Original Link

AI made coding more enjoyable

To me, one of the most annoying parts of software engineering is writing code that doesn't require thinking. It's just a typing exercise, and that's boring. That includes code outside of the happy path, like error handling and input validation. But also other typing exercises, like processing an entity with 10 different types, where each type must be handled separately. Or propagating one property through the system across 5 different types in multiple layers.

Writing tests is another use-case I now enjoy. I design the architecture so the code is testable, then write the first test so the AI knows how tests should be written and which cases should be covered. Then I tell the AI each test case and it writes them for me.

The only place I don't trust it yet is when code must be copy-pasted. I can't tell whether it actually cuts and pastes the code, or whether the LLM brain sits in between. In the latter case there may be tiny errors that I'd never find, so I'm not doing that. But maybe I'm paranoid.

In any case, this is incredible. In the past years I've been handed tools that do the most tedious tasks of software engineering for me. And I love it.

Source: Hacker News | Original Link



-fbounds-safety: Enforcing bounds safety for C

-fbounds-safety: Enforcing bounds safety for C — Clang 23.0.0git documentation

Overview

NOTE: This is a design document and the feature is not available for users yet. Please see Implementation plans for -fbounds-safety for more details.

-fbounds-safety is a C extension to enforce bounds safety, preventing the out-of-bounds (OOB) memory accesses that remain a major source of security vulnerabilities in C. -fbounds-safety aims to eliminate this class of bugs by turning OOB accesses into deterministic traps.

The -fbounds-safety extension offers bounds annotations that programmers can use to attach bounds to pointers. For example, programmers can add the __counted_by(N) annotation to parameter ptr, indicating that the pointer has N valid elements:

void foo(int *__counted_by(N) ptr, size_t N);

Using this bounds information, the compiler inserts bounds checks on every pointer dereference, ensuring that the program does not access memory outside the specified bounds. The compiler requires programmers to provide enough bounds information so that accesses can be checked at either run time or compile time, and it rejects code if they cannot be.

The most important contribution of -fbounds-safety is how it reduces the programmer's annotation burden: it reconciles bounds annotations at ABI boundaries with the use of implicit wide pointers (a.k.a. "fat" pointers) that carry bounds information on local variables without the need for annotations. We designed this model so that it preserves ABI compatibility with C while minimizing adoption effort. The -fbounds-safety extension has been adopted on millions of lines of production C code and proven to work in a consumer operating system setting.
The extension was designed to enable incremental adoption, a key requirement in real-world settings where modifying an entire project and its dependencies all at once is often not possible. It also addresses multiple other practical challenges that have made existing approaches to safer C dialects difficult to adopt, offering these properties that make it widely adoptable in practice:

- It is designed to preserve the Application Binary Interface (ABI).
- It interoperates well with plain C code.
- It can be adopted partially and incrementally while still providing safety benefits.
- It is a conforming extension to C. Consequently, source code that adopts the extension can continue to be compiled by toolchains that do not support the extension (caveat: this still requires inclusion of a header file macro-defining bounds annotations to empty).
- It has a relatively low adoption cost.

This document discusses the key designs of -fbounds-safety. The document is subject to active updates with a more detailed specification.

Programming Model

Overview

-fbounds-safety ensures that

Source: Hacker News | Original Link

America vs. Singapore: You Can’t Save Your Way Out of Economic Shocks

America vs. Singapore: You Can't Save Your Way out of Economic Shocks
Saving regret has less to do with procrastination than we thought, and more to do with whether your country absorbs economic shocks or lets them hit your savings.
Dave Deek, Feb 18, 2026

Key Facts

- Procrastination does not meaningfully predict saving regret. Across 12 psychometric measures tested in both countries, the relationship is weak to nonexistent, and where statistically significant, it frequently runs in the opposite direction from what the behavioral economics literature predicts.
- Economic shocks do. Exposure to negative financial shocks is the dominant predictor of wishing you'd saved more.

About half of Americans between 60 and 74 wish they had saved more. That's a familiar finding, and it comes with a familiar explanation: people procrastinate. They know they should save, they intend to save, and then they don't, because the present is vivid and retirement is abstract, because inertia is powerful, because human beings are not the rational optimizers of the textbook. A generation of behavioral economics has crystallized around this idea. We get nudges, automatic enrollment in 401(k) plans, default escalation schedules. The policy apparatus assumes, at bottom, that under-saving is a self-control problem.

A new working paper from Rohwedder, Hurd, and Börsch-Supan suggests we've been looking in the wrong place. The authors surveyed thousands of people aged 60-74 in the United States and Singapore, two countries that both emphasize individual responsibility for retirement but differ sharply in institutional design. They asked a simple question: if you could do it over, would you have saved more? Then they tested what actually predicts the answer. Is it procrastination? Or is it something else? The something else turns out to be economic shocks.

And the difference is not subtle. Which is darkly funny (or not, depending on you), considering what a lot of people are saying about LLMs/AI and the job market.

What do you mean by "tested"?

The authors didn't just ask people whether they procrastinate and whether they regret their saving. They fielded 12 separate psychometric measures: questions about putting off tasks, giving up when things get difficult, settling for mediocre results, losing motivation, preferring immediate gratification. These are the kinds of instruments the behavioral literature treats as markers of present bias and poor self-control. The prediction, grounded in decades of work from Laibson, Thaler, O'Donoghue, Rabin, and others, is straightforward: people who score high on procrastination should be more likely to wish they'd saved more.

They aren't. Across both countries, across 21 separate statis

Source: Hacker News | Original Link

C++26: std::is_within_lifetime

C++26: std::is_within_lifetime | Sandor Dargo's Blog
Sandor Dargo · Feb 18, 2026 · 4 min

When I was looking for the next topic for my posts, my eyes stopped on std::is_within_lifetime. Dealing with lifetime issues is a quite common source of bugs, after all. Then I clicked on the link and I read "Checking if a union alternative is active". I scratched my head. Is the link correct? It is, and it totally makes sense. Let's get into the details and first check what P2641R4 is about.

What does std::is_within_lifetime do?

C++26 adds bool std::is_within_lifetime(const T* p) to the <type_traits> header. This function checks whether p points to an object that is currently within its lifetime during constant evaluation. The most common use case is checking which member of a union is currently active. Here's a simple example:

```cpp
union Storage {
    int i;
    double d;
};

constexpr bool check_active_member() {
    Storage s;
    s.i = 42;
    // At this point, 'i' is the active member
    return std::is_within_lifetime(&s.i); // returns true
}
```

In this example, after assigning to s.i, that member becomes active. The function std::is_within_lifetime(&s.i) returns true, confirming that i is within its lifetime. If we checked std::is_within_lifetime(&s.d) at this point, it would return false since d is not the active member.

Properties and the name

The function has some interesting design choices that are worth discussing.

It's consteval only

std::is_within_lifetime is consteval, meaning it can only be used during compile-time evaluation. You cannot call it at runtime. This might seem limiting, but it's actually by design. The purpose of this function is to solve problems that exist specifically in the constant evaluation world.

At runtime, you have other mechanisms available, like tracking state with additional variables. The compiler doesn't maintain the same level of lifetime-tracking information at runtime that it does during constant evaluation.

Why a pointer instead of a reference?

The function takes a pointer rather than a reference, which might seem unusual for a query operation. The reasoning is straightforward: passing by reference can introduce complications with temporary objects and lifetime-extension rules. A pointer makes the intent explicit: you're asking about a specific memory location, not about a value or a reference that might be bound to various things. It's a cleaner semantic fit for what the function actually does.

Why not "is_union_member_active"?

You might wonder why the feature has such a general name when the primary use case is specifically about unions. The answer is that the committee chose to solve the problem at a more fundamental level. Instead of adding a union-specific check, they provided a general mech

Source: Hacker News | Original Link

Show HN: Mini-Diarium – An encrypted, local, cross-platform journaling app

GitHub – fjrevoredo/mini-diarium: An encrypted local cross-platform journaling app Skip to content You signed in with another tab or window. Reload to refresh your session. You signed out in another tab or window. Reload to refresh your session. You switched accounts on another tab or window. Reload to refresh your session. Dismiss alert fjrevoredo / mini-diarium Public Notifications You must be signed in to change notification settings Fork 1 Star 41 An encrypted local cross-platform journaling app License MIT license 41 stars 1 fork Branches Tags Activity Star Notifications You must be signed in to change notification settings fjrevoredo/mini-diarium master Branches Tags Go to file Code Open more actions menu Folders and files Name Name Last commit message Last commit date Latest commit History 13 Commits 13 Commits .agents/ skills .agents/ skills .claude .claude .github .github .vscode .vscode docs docs public public scripts scripts src-tauri src-tauri src src .gitignore .gitignore .prettierignore .prettierignore .prettierrc.json .prettierrc.json AGENTS.md AGENTS.md CHANGELOG.md CHANGELOG.md CLAUDE.md CLAUDE.md CODE_OF_CONDUCT.md CODE_OF_CONDUCT.md CONTRIBUTING.md CONTRIBUTING.md LICENSE LICENSE README.md README.md REQUIREMENTS.md REQUIREMENTS.md SECURITY.md SECURITY.md bump-version.ps1 bump-version.ps1 bump-version.sh bump-version.sh bun.lock bun.lock eslint.config.js eslint.config.js index.html index.html package.json package.json tsconfig.json tsconfig.json tsconfig.node.json tsconfig.node.json uno.config.ts uno.config.ts vite.config.ts vite.config.ts vitest.config.ts vitest.config.ts View all files Repository files navigation Mini Diarium An encrypted, local cross-platform journaling app Mini Diarium keeps your journal private. Every entry is encrypted with AES-256-GCM before it touches disk, the app never connects to the internet, and your data never leaves your machine. Built with Tauri, SolidJS, and Rust. 
Background

Mini Diarium is a spiritual successor to Mini Diary by Samuel Meuli. I loved the original tool: it was simple, private, and did exactly what a journal app should do. Unfortunately, it's been unmaintained for years and its dependencies have aged out. I initially thought about forking it and modernizing the stack, but that turned out to be impractical. So I started over from scratch, keeping the same core philosophy (encrypted, local-only, minimal) while rebuilding completely with Tauri 2, SolidJS, and Rust. The result is a lighter, faster app with stronger encryption and a few personal touches.

Features

- Key file authentication: unlock your diary with an X25519 private key file instead of (or alongside) your password, like SSH keys for your journal. Register multiple key files; manage all auth methods from Preferences. See Key File Authentication for details.
- AES-256-GCM encryption: all entries are encrypted with a random master key. Each auth method holds its own wrapped copy of that key, so adding or removing a method is O(1), with n
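The wrapped-master-key layout described above is a standard envelope-encryption pattern: one random master key encrypts all entries, and each auth method stores its own wrapped copy of that key. A minimal stdlib-only sketch of that structure follows; the XOR "wrap" is a toy stand-in for the app's real AES-256-GCM key wrapping, and all names here are illustrative, not Mini Diarium's actual code.

```python
import hashlib
import os

def derive_method_key(secret: bytes, salt: bytes) -> bytes:
    # Stretch an auth-method secret (password, key-file material, ...)
    # into a 32-byte key. PBKDF2 parameters are illustrative.
    return hashlib.pbkdf2_hmac("sha256", secret, salt, 100_000)

def wrap(master_key: bytes, method_key: bytes) -> bytes:
    # Toy key wrap: XOR against a keystream derived from the method key.
    # A real implementation would use AES-256-GCM here.
    keystream = hashlib.sha256(method_key).digest()
    return bytes(a ^ b for a, b in zip(master_key, keystream))

unwrap = wrap  # XOR is its own inverse

# One random master key encrypts every journal entry.
master_key = os.urandom(32)

# Adding an auth method stores exactly one new wrapped copy -- O(1),
# and no entries need re-encrypting.
salt = os.urandom(16)
password_key = derive_method_key(b"correct horse battery staple", salt)
wrapped_copy = wrap(master_key, password_key)

assert unwrap(wrapped_copy, password_key) == master_key
```

Removing an auth method is the mirror image: delete that method's wrapped copy, and the master key (and every entry) is untouched.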

Source: Hacker News | Original Link

Pebble Production: February Update

February Pebble Production and Software Updates | rePebble Blog

# Mega update on Pebble Time 2, Pebble Round 2 and Index 01

Things are busy in Pebbleland! We're getting close to shipping 3 new hardware products and all the associated software that comes along with them. Overall, things feel good. I'd say the amount of last-minute shenanigans is about normal. Getting new hardware into 'production' is a pretty wild and exciting process. Building hardware is an exercise in balancing the competing priorities of cost, quality and speed. In the last-mile push to get into production, things can change quickly for the better (woohoo! the waterproof test finally passes, we can move to the next stage) or for the worse (uh, the production line needs 3 more test fixtures to test Index 01 mic performance, and a major production test software update…that'll be a lot more money). Unlike with software, you can't easily fix hardware issues after you ship! Making these last-minute decisions is sometimes pretty stressful but hey, that's the world of making hardware.

# Pebble Time 2 Production Update

We're in the Production Verification Test (PVT) phase right now, the last stop before Mass Production (MP). During this phase we manufactured hundreds of PT2s in a series of test builds, uncovered a bunch of issues, and fixed them. Just before the factories shut down for Lunar New Year, we got the good news that all the tests passed on the last build! We focused most of January on improving the waterproofing on the watch (flash back to last summer, when we worked on this for Pebble 2 Duo!). I traveled to visit the factory (travelogue here) and worked through a lot of open issues. Above is a video of the speaker waterproof testing from the production line. The good news is that we fixed all the issues, tests are passing, and it looks like we'll be able to certify PT2 with a waterproof rating of 30m, or 3ATM!
This means you can get your watch wet, wear it while swimming (but not in hot tubs/saunas) and generally not worry about it. It's not a dive watch, though. Also, don't expose it to hot water (this could weaken the waterproof seals) or high-pressure water. It's not invincible.

# Entering PT2 Mass Production on March 9

Snapshot of our mass production plan (output counts are cumulative).

The factory is closed now for Lunar New Year and will reopen around the end of Feb. As of today, mass production is scheduled to start on March 9. It will take the production line a little while to spin up towards our target output of 500 watches per day. Finished watches ship from the factory once a week to our distribution center (which takes ~1 week), then get packed for shipping (a few days to a week), then get delivered to you (~7-10 days). These dates and estimates are ALL subject to change – if we run into a problem, production shuts down until we fix it. Delays can and most likely will happen.

# What everyone's been waiting for…when will your PT2 arrive 🙂

Based on current
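The pipeline numbers above (MP start on March 9, ~1 week factory-to-distribution-center, a few days to a week of packing, ~7-10 days of delivery) can be turned into a rough back-of-the-envelope arrival window. This is purely illustrative, the year is an assumption not stated in this excerpt, and as the post stresses, every one of these dates can slip.

```python
from datetime import date, timedelta

# Assumed year; the excerpt only says "March 9".
mp_start = date(2026, 3, 9)

def arrival_window(batch_ready: date) -> tuple[date, date]:
    # DC transit (~1 week) + packing (a few days to a week) + delivery (~7-10 days)
    earliest = batch_ready + timedelta(days=7 + 3 + 7)
    latest = batch_ready + timedelta(days=7 + 7 + 10)
    return earliest, latest

earliest, latest = arrival_window(mp_start)
print(earliest, latest)  # roughly 2.5 to 3.5 weeks after a batch leaves the line
```

At ~500 watches per day once the line spins up, later batches push each buyer's own window out further, which is presumably what the (truncated) per-order estimate below the fold covers.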

Source: Hacker News | Original Link

Coding Tricks Used in the C64 Game Seawolves

9 Exotic Coding Tricks used in the C64 Game, Seawolves

Updated on: 15th July 2025

Introduction

With the release of my first ever commercial game on the Commodore 64, Seawolves, I thought it might be of interest to the coders among you how the game was constructed. From the outset, brace yourself to read about some "code less travelled", as the game required several strange or quirky methods that are perhaps more associated with the madness that goes on in the demo scene:

1. NMIs + IRQs running in sync.
2. Real-time torpedoes thanks to "splites".
3. Real-time implosion animations.
4. Real-time ocean waves.
5. Real-time water distortion effects.
6. FLD shift + upward correction.
7. GFX stream-ins.
8. Quick logic.
9. Branch-jumping.

Let's check them out in turn.

#1: NMIs + IRQs Running in Synchronisation

I first combined NMIs and IRQs inside a game environment in Parallaxian and again in The Wild Wood, to great effect, because the approach offers the following benefits:

- You can easily interrupt long tasks in a raster IRQ (IRST) with the NMI to perform short, scanline-exact tasks without the need for nesting IRQs; it's simpler and more elegant.
- It can be used as a safety net to minimise the effects of raster stall events, in which freak load conditions on an IRQ (say, a 1-in-1000 alignment of circumstances) would cause the IRQ schema to stall/collapse for a screen refresh frame before recovering. In this case, the final NMI handler, rather than the last IRQ handler, sets the IRQ pointers/vectors for the top-of-the-screen IRQ. This way, if (for example) IRQ handler #3 out of 7 IRQ handlers stalls, the contagion only spreads to the bottom of the screen before normal service is resumed at the top during the next frame. Otherwise, the stall effects would not recover until the next time IRQ handler #3 fires.
- NMIs, being timer interrupts, can be set up to trigger at pretty much any cycle along a scanline, making them more efficient at managing raster time than IRSTs; typically with a raster IRQ you have to carefully place NOPs or otherwise juggle code before changing a register (e.g. changing the background colour), and that's after you have lost time in stabilising the IRST, whereas with NMIs you have better control over where on the desired scanline they fire.

If the foregoing sounds horrendous and esoteric, I can only apologise for making it thus through poor explanation skills, but really, it boils down to giving the developer a more coder-friendly way of slicing the screen up into horizontal layers that collectively form a useful game environment. NMIs are timer interrupts, meaning that unlike IRQs they can't be triggered by $D012 on the VIC-II chip, but instead are controlled by either of the two timers on CIA chip #2 (likewise, timer IRQs can be set up using either of the two timers on CIA #1). The timers hold the number of cycles between each NMI instance in the form of a lo-byte, hi-byte 16-bit number stored
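To make the lo-byte/hi-byte timer value concrete, here is a small worked example (not code from Seawolves): the latch value for one NMI per frame on a PAL C64, using the standard figures of 63 cycles per scanline and 312 scanlines per frame. The CIA #2 timer A latch registers are $DD04 (lo) and $DD05 (hi).

```python
# PAL C64 timing constants.
CYCLES_PER_LINE = 63
LINES_PER_FRAME = 312

# One NMI per frame: total cycles between triggers.
cycles_per_frame = CYCLES_PER_LINE * LINES_PER_FRAME  # 19656 = $4CC8

# Split into the lo-byte/hi-byte pair the CIA timer latch expects.
lo = cycles_per_frame & 0xFF
hi = cycles_per_frame >> 8

print(f"write ${lo:02X} to $DD04 and ${hi:02X} to $DD05")  # CIA #2 timer A
```

Shrinking the latch value gives multiple NMIs per frame, and because the trigger point is a cycle count rather than a raster line, it can land mid-scanline exactly where the handler needs it.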

Source: Hacker News | Original Link

Bridging Elixir and Python with Oban

Bridging Elixir and Python with Oban · Oban Pro

February 3, 2026 · elixir, python, tutorial

What choices lie before you when your Elixir app needs functionality that only exists, or is more mature, in Python? There are machine learning models, PDF rendering libraries, and audio/video editing tools without an Elixir equivalent (yet). You could piece together some HTTP calls, or bring in a message queue…but there's a simpler path through Oban. Whether you're enabling disparate teams to collaborate, gradually migrating from one language to another, or leveraging packages that are lacking in one ecosystem, having a mechanism to transparently exchange durable jobs between Elixir and Python opens up new possibilities. On that tip, let's build a small example to demonstrate how trivial bridging can be. We'll call it "Badge Forge".

Forging Badges

"Badge Forge," like "Fire Saga" before it, is a pair of nouns that barely describes what our demo app does. But it's balanced, and why hold back on the whimsy? More concretely, we're building a micro app that prints conference badges. The actual PDF generation happens through WeasyPrint, a Python library that turns HTML and CSS into print-ready documents. It's mature and easy to use. For the purpose of this demo, we'll pretend that running ChromicPDF is unpalatable and Typst isn't available. There's no web framework involved, just command-line output and job processing. Don't fret, we'll bring in some visualization later.

Sharing a Common Database

Some say you're cra-zay for sharing a database between applications. We say you're already willing to share a message queue, and now the database is your task broker, so why not? It's happening. Oban for Python was designed for interop with Elixir from the beginning. Both libraries read and write to the same oban_jobs table, with job args stored as JSON, so they're fully language-agnostic.
When an Elixir app enqueues a job destined for a Python worker (or vice versa), it simply writes a row. The receiving side picks it up based on the queue name, processes it, and updates the status. That's the whole mechanism. Each side maintains its own cluster leadership, so an Elixir node and a Python process won't compete for leader responsibilities. They coordinate through the jobs table, but take care of business independently. Both sides can also exchange PubSub notifications through Postgres for real-time coordination. The importance of that tidbit will become clear soon enough.

Printing in Action

This is more of a demonstration than a tutorial. We don't expect you to build along, but we hope you'll see how little code it takes to form a bridge. With a wee config in place and both apps pointing at the same database, we can start generating badges.

Enqueueing Jobs

Generation starts on the Elixir side. This function enqueues a batch of (fake) jobs destined for the Python worker:

def enqueue_batch(count \\ 100) do generate
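The write-a-row / pick-up-by-queue mechanism described above can be sketched in a few lines of Python. This simulates both sides against an in-memory SQLite table with a simplified, hypothetical schema; the real oban_jobs table lives in Postgres and has more columns, and real workers would use the Oban libraries rather than raw SQL.

```python
import json
import sqlite3

# Simplified stand-in for the shared oban_jobs table.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE oban_jobs (
    id INTEGER PRIMARY KEY, queue TEXT, worker TEXT,
    args TEXT, state TEXT DEFAULT 'available')""")

# "Elixir" side: enqueue a badge job destined for the Python worker
# by simply writing a row with JSON-encoded args.
db.execute(
    "INSERT INTO oban_jobs (queue, worker, args) VALUES (?, ?, ?)",
    ("badges", "BadgeWorker", json.dumps({"attendee": "Ada", "conf": "ElixirConf"})),
)

# "Python" side: pick up by queue name, process, update the status.
row = db.execute(
    "SELECT id, args FROM oban_jobs WHERE queue = ? AND state = 'available'",
    ("badges",),
).fetchone()
args = json.loads(row[1])  # args are plain JSON: fully language-agnostic
db.execute("UPDATE oban_jobs SET state = 'completed' WHERE id = ?", (row[0],))
```

Because the args column is just JSON, neither side needs to know what language wrote the row, which is the whole point of the bridge.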

Source: Hacker News | Original Link

harvard-edge/cs249r_book – Introduction to Machine Learning Systems

GitHub – harvard-edge/cs249r_book: Introduction to Machine Learning Systems (19.9k stars, 2.3k forks) · mlsysbook.ai

Machine Learning Systems: Principles and Practices of Engineering Artificially Intelligent Systems. English • 中文 • 日本語 • 한국어. 📖 Read Online • Tiny🔥Torch • 📄 Download PDF • 📓 Download EPUB • 🌐 Explore Ecosystem. 📚 Hardcopy edition coming 2026 with MIT Press.

Mission

The world is rushing to build AI systems. It is not engineering them. That gap is what we mean by AI engineering. AI engineering is the discipline of building efficient, reliable, safe, and robust intelligent systems that operate in the real world, not just models in isolation.
Our mission: Establish AI engineering as a foundational discipline, alongside software engineering and computer engineering, by teaching how to design, build, and evaluate end-to-end intelligent systems. The long-term impact of AI will be shaped by engineers who can turn ideas into working, dependable systems.

What's in this repo

This repository is the open learning stack for AI systems engineering. It includes the textbook source, TinyTorch, hardware kits, and upcoming co-labs that connect principles to runnable code and real devices.

Start Here

Choose a path based on your goal.

- READ: Start with the textbook. Try Chapter 1 and the Benchmarking chapter.
- BUILD: Start TinyTorch with the getting started guide. Begin with Module 01 and work up from CNNs to transformers and the MLPerf benchmarks.
- DEPLOY: Pick a hardware kit and run the labs on Arduino, Raspberry Pi, and other edge devices.
- CONNECT: Say hello in Discussions. We will do our best to reply.

The Learning Stack

The learning stack below shows how the textbook connects to hands

Source: GitHub Trending | Original Link