Show HN: An encrypted, local, cross-platform journaling app

fjrevoredo/mini-diarium: An encrypted, local, cross-platform journaling app. MIT license, 13 commits.

Mini Diarium keeps your journal private. Every entry is encrypted with AES-256-GCM before it touches disk, the app never connects to the internet, and your data never leaves your machine. Built with Tauri, SolidJS, and Rust.

Background: Mini Diarium is a spiritual successor to Mini Diary by Samuel Meuli. I loved the original tool. It was simple, private, and did exactly what a journal app should do. Unfortunately, it has been unmaintained for years and its dependencies have aged out. I initially thought about forking it and modernizing the stack, but that turned out to be impractical. So I started over from scratch, keeping the same core philosophy (encrypted, local-only, minimal) while rebuilding completely with Tauri 2, SolidJS, and Rust. The result is a lighter, faster app with stronger encryption and a few personal touches.

Features:
- Key file authentication: unlock your diary with an X25519 private key file instead of (or alongside) your password, like SSH keys for your journal. Register multiple key files; manage all auth methods from Preferences. See Key File Authentication for details.
- AES-256-GCM encryption: all entries are encrypted with a random master key. Each auth method holds its own wrapped copy of that key, so adding or removing a method is O(1), with no ...
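That wrapped-master-key design is a standard envelope-encryption pattern and easy to sketch. Below is a minimal illustration in Python with the `cryptography` package (not Mini Diarium's actual Rust code; the KDF parameters and field names are illustrative): entries are encrypted once under a random master key, and each auth method stores only its own AES-256-GCM-wrapped copy of that key.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.scrypt import Scrypt

def wrap_master_key(master_key: bytes, password: bytes) -> dict:
    """Derive a KEK from the password, then AES-256-GCM-wrap the master key."""
    salt = os.urandom(16)
    kek = Scrypt(salt=salt, length=32, n=2**15, r=8, p=1).derive(password)
    nonce = os.urandom(12)
    return {"salt": salt, "nonce": nonce,
            "wrapped": AESGCM(kek).encrypt(nonce, master_key, None)}

def unwrap_master_key(blob: dict, password: bytes) -> bytes:
    kek = Scrypt(salt=blob["salt"], length=32, n=2**15, r=8, p=1).derive(password)
    return AESGCM(kek).decrypt(blob["nonce"], blob["wrapped"], None)

# Entries are encrypted once under the master key; each auth method keeps its
# own wrapped copy, so adding or removing a method never re-encrypts entries.
master = AESGCM.generate_key(bit_length=256)
blob = wrap_master_key(master, b"correct horse battery staple")
assert unwrap_master_key(blob, b"correct horse battery staple") == master
```

Removing a password or key file then just deletes its wrapped blob; the master key and the encrypted entries are untouched.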

Source: Hacker News | Original Link

open-mercato/open-mercato – AI‑supportive CRM / ERP foundation framework — built to power R&D, new processes, operations, and growth. It’s modular, extensible, and designed for teams that want strong defaults with room to customize everything. Better than Django, Retool and other alternatives – and Enterprise Grade!

open-mercato/open-mercato: AI-supportive CRM / ERP foundation framework. MIT license, 527 stars, 97 forks, 1,366 commits. Docs: docs.openmercato.com.

Open Mercato is a new-era, AI-supportive platform for shipping enterprise-grade CRMs, ERPs, and commerce backends. It's modular, extensible, and designed so teams can mix their own modules, entities, and workflows while keeping the guardrails of a production-ready stack. Start with 80% done. Buy vs. build? Now you can have the best of both: use Open Mercato's enterprise-ready business features (CRM, Sales, OMS, Encryption) and build the remaining 20% that really makes the difference for your business.

Core Use Cases: 💼 CRM: model customers, opportunities, and bespoke workflows with infinitely flexible data definitions ...

Source: GitHub Trending | Original Link

Don’t Trust the Salt: AI Summarization, Multilingual Safety, and LLM Guardrails

Don't Trust the Salt: AI Summarization, Multilingual Safety, and Evaluating LLM Guardrails. Roya Pakzad, Humane AI, Feb 16, 2026.

"The devil is in the details," they say. And so is the beauty, the thinking, the "but ...". Maybe that's why the phrase "elevator pitch" gives me a shiver. It might have started back at AMD, when I was a young, aspiring engineer, joining every "Women in This or That" club I could find. I was searching for the feminist ideas I'd first found among women's rights activists in Iran, hoping to see them alive in "lean in"-era corporate America. Naive, I know. Later, as I ventured through academic papers and policy reports, I discovered the world of Executive Summaries and Abstracts. I wrote many, and read many, and I always knew that if I wanted to actually learn, digest, challenge, and build on a paper, I needed to go to the methodology section, to limitations, footnotes, appendices. That, I felt, was how I should train my mind to do original work.

Interviewing is also a big part of my job at Taraaz, researching social and human rights impacts of digital technologies including AI. Sometimes, from an hour of conversation, the most important finding is just one sentence. Or it's the silence between sentences: a pause, then a longer pause. That's sometimes what I want from an interview, not a perfectly written summary of "Speaker A" and "Speaker B" with listed main themes. If I wanted those, I would run a questionnaire, not an interview.

I'm not writing to dismiss AI-generated summarization tools. I know there are many benefits. But if your job as a researcher is to bring critical thinking, subjective understanding, and a novel approach to your research, don't rely on them. And here's another reason why: last year at Mozilla Foundation, I had the opportunity to go deep on evaluating large language models. I built multilingual AI evaluation tools and ran experiments. But summarization kept nagging at me. It felt like a blind spot in the AI evaluation world. Let me show you an example from the tool I made last year.

Project 1: Bilingual Shadow Reasoning. The three summaries below come from the same source document, "Report of the Special Rapporteur on the situation of human rights in the Islamic Republic of Iran, Mai Sato," generated by the same model (OpenAI GPT-OSS-20B) at the same time. The only difference is the instruction used to steer the model's reasoning. This was part of my submission for OpenAI's GPT-OSS-20B Red Teaming Challenge, where I introduced a method I call Bilingual Shadow Reasoning. The technique steers a model's hidden chain-of-thought through customized "deliberative" (non-English) policies, making it possible to bypass safety guardrails and evade audits, all while the output appears neutral and professional on the surface. For this work I define a policy as ...
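The experiment's shape is simple to reproduce against your own models: one source document, one model, several steering instructions, and a side-by-side read of the resulting summaries. A minimal sketch follows (not the author's harness; it assumes an OpenAI-compatible local endpoint such as vLLM serving GPT-OSS-20B, and the endpoint URL, file name, and policy texts are placeholders):

```python
import requests

ENDPOINT = "http://localhost:8000/v1/chat/completions"  # assumed local server
DOCUMENT = open("special_rapporteur_report.txt").read()  # hypothetical file

POLICIES = {
    "neutral": "Summarize the document accurately and completely.",
    "shadow": "<a non-English 'deliberative policy' steering hidden reasoning>",
}

def summarize(policy: str) -> str:
    """One summary of the same document under one steering instruction."""
    resp = requests.post(ENDPOINT, json={
        "model": "gpt-oss-20b",
        "messages": [
            {"role": "system", "content": policy},
            {"role": "user", "content": "Summarize this report:\n\n" + DOCUMENT},
        ],
    }, timeout=300)
    return resp.json()["choices"][0]["message"]["content"]

# Same document, same model, same moment: the only variable is the policy.
for name, policy in POLICIES.items():
    print(f"--- {name} ---\n{summarize(policy)[:400]}\n")
```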

Source: Hacker News | Original Link

I made ChatGPT and Google say I'm a competitive hot-dog-eating world champion

Thomas Germain (@thomasgermain.bsky.social) on Bluesky, 2026-02-18: "I just did the dumbest thing of my entire career to prove a much more serious point. I tricked ChatGPT and Google, and made them tell other users I'm a competitive hot-dog-eating world champion. People are using this trick on a massive scale to make AI tell you lies. I'll explain how I did it."

Source: Hacker News | Original Link

Step 3.5 Flash – Open-source foundation model, supports deep reasoning at speed

Step 3.5 Flash: Fast Enough to Think. Reliable Enough to Act. Updated 2026-02-12, StepFun. Links: GitHub, HuggingFace, Tech Report, ModelScope, OpenClaw Guidance.

Average score across eight benchmarks (excluding xbench-DeepSearch); the Step 3.5 Flash score is measured under standard settings, i.e. without Parallel Thinking:

- Step 3.5 Flash: 196B params, avg score 81.0
- GLM-4.7: 355B params, 78.5
- DeepSeek V3.2: 671B params, 77.3
- Kimi K2.5: 1000B params, 80.5
- Gemini 3.0 Pro: params unknown, 80.7
- Claude Opus 4.5: params unknown, 80.6
- GPT-5.2 xhigh: params unknown, 82.2

Step 3.5 Flash is our most capable open-source foundation model, engineered to deliver frontier reasoning and agentic capabilities with exceptional efficiency. Built on a sparse Mixture of Experts (MoE) architecture, it selectively activates only 11B of its 196B parameters per token. This "intelligence density" allows it to rival the reasoning depth of top-tier proprietary models while maintaining the agility required for real-time interaction.

Deep Reasoning at Speed: while chatbots are built for reading, agents must reason fast. Powered by 3-way Multi-Token Prediction (MTP-3), Step 3.5 Flash achieves a generation throughput of 100-300 tok/s in typical usage (peaking at 350 tok/s for single-stream coding tasks). This allows for complex, multi-step reasoning chains with immediate responsiveness.

A Robust Engine for Coding & Agents: Step 3.5 Flash is purpose-built for agentic tasks, integrating a scalable RL framework that drives consistent self-improvement. It achieves 74.4% on SWE-bench Verified and 51.0% on Terminal-Bench 2.0, proving its ability to handle sophisticated, long-horizon tasks with unwavering stability.

Efficient Long Context: the model supports a cost-efficient 256K context window by employing a 3:1 Sliding Window Attention (SWA) ratio, integrating three SWA layers for every one full-attention layer. This hybrid approach ensures consistent performance across massive documents or long codebases while significantly reducing the computational overhead typical of standard long-context models.

Accessible Local Deployment: optimized for accessibility, Step 3.5 Flash brings elite-level intelligence to local environments. It runs securely on high-end consumer hardware (e.g., Mac Studio M4 Max, NVIDIA DGX Spark), ensuring data privacy without sacrificing performance.

Reasoning, AIME 2025 scores: Step 3.5 Flash 97.3; Step 3.5 Flash (PaCoRe) 99.9; GLM-4.7 95.7; DeepSeek V3.2 93.1; Kimi K2.5 96.1; Gemini 3.0 Pro 95.0; Claude Opus 4.5 92.8; GPT-5.2 xhigh 100.0.
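That 3:1 hybrid attention layout is concrete enough to sketch. Here is a toy illustration in Python (not StepFun's implementation; the layer count and window size below are made up) of how such a schedule interleaves sliding-window and full-attention layers, and what each layer kind may attend to:

```python
# Toy sketch of a 3:1 SWA / full-attention hybrid, as described above.
def layer_schedule(n_layers: int, ratio: int = 3) -> list[str]:
    """Every (ratio+1)-th layer uses full attention; the rest use SWA."""
    return ["full" if (i + 1) % (ratio + 1) == 0 else "swa"
            for i in range(n_layers)]

def can_attend(q: int, k: int, kind: str, window: int = 4096) -> bool:
    """Causal mask: full layers see all earlier tokens, SWA layers a window."""
    if k > q:
        return False  # causal: no peeking at future tokens
    return kind == "full" or (q - k) < window

print(layer_schedule(8))
# ['swa', 'swa', 'swa', 'full', 'swa', 'swa', 'swa', 'full']
```

The periodic full-attention layers keep long-range information flowing across the whole context, while the cheaper windowed layers do most of the work, which is where the long-context cost savings come from.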

Source: Hacker News | Original Link

Andrew Mountbatten-Windsor arrested on suspicion of misconduct in public office

Andrew Mountbatten-Windsor arrested on suspicion of misconduct in public office (BBC News live page, updated 2 minutes ago, 111,072 viewing).

Summary:
- Andrew Mountbatten-Windsor has been arrested on suspicion of misconduct in public office
- Police say he is in custody and officers are carrying out searches at addresses in Berkshire and Norfolk
- Photos show cars arriving at the Sandringham Estate in Norfolk earlier this morning
- It comes after Thames Valley Police said they were assessing a complaint over the alleged sharing of confidential material by the former prince with the late sex offender Jeffrey Epstein
- Andrew, who turns 66 today, has consistently and strenuously denied any wrongdoing
- This is a breaking story; more to follow

Live reporting, edited by Marita Moloney and Dulcie Lee.

Thames Valley Police said in a statement: "As part of the investigation, we have today (19/2) arrested a man in his sixties from Norfolk on suspicion of misconduct in public office and are carrying out searches at addresses in Berkshire and Norfolk. The man remains in police custody at this time. We will not be naming the arrested man, as per national guidance. Please also remember that this case is now active so care should be taken with any publication to avoid being in contempt of court." Assistant Chief Constable Oliver Wright said: "Following a thorough assessment, we have now opened an investigation into this allegation of misconduct in public office. It is important that we protect the integrity and objectivity of our investigation as we work with our partners to investigate this alleged offence. We understand the significant public interest in this case, and we will provide updates at the appropriate time."

Earlier: before the statement, it was not known which force had made the arrest, but vehicles believed to be unmarked police cars were seen this morning at Sandringham in Norfolk, where Andrew has been living since leaving his home in Windsor. This is the first time the former prince, who has faced numerous allegations over his links to the convicted sex offender Jeffrey Epstein, has been arrested. Andrew has consistently and strenuously denied any wrongdoing. BBC News understands he was arrested on suspicion of misconduct in public office; the particulars of the arrest are not yet known.

Lucy Manning, special correspondent: Andrew Mountbatten-Windsor has been arrested on suspicion of misconduct in public office, the BBC understands.

Source: Hacker News | Original Link


European Tech Alternatives

EU Tech Map: discover European tech alternatives. Find GDPR-compliant, EU-hosted software and service alternatives that respect your data sovereignty. Browse 500+ European companies across 30+ categories.

Popular categories: Cloud Computing, CRM, Cybersecurity, Analytics, Email, Cloud Storage, Communication, Project Management, E-commerce, AI & Machine Learning, Document Collaboration, Identity & Access.

Popular alternatives: Google Analytics, AWS, Salesforce, Slack, Microsoft 365, Zoom, Dropbox, Mailchimp, HubSpot, Jira.

Browse by country: Germany, France, Netherlands, Sweden, Finland, Spain, Ireland, Austria, Switzerland, Denmark, Poland, Italy.

About: EU Tech Map is a directory of European software companies and GDPR-compliant alternatives, helping businesses find trustworthy, privacy-respecting technology solutions hosted in Europe.

Source: Hacker News | Original Link

15 years of FP64 segmentation, and why the Blackwell Ultra breaks the pattern

Fifteen Years of FP64 Segmentation, and Why the Blackwell Ultra Breaks the Pattern. Nicolas Dickenmann, February 18, 2026.

Buy an RTX 5090, the fastest consumer GPU money can buy, and you get 104.8 TFLOPS of FP32 compute. Ask it to do double-precision math and you get 1.64 TFLOPS. That 64:1 gap is not a technology limitation. For fifteen years the FP64:FP32 ratio on consumer GPUs has steadily worsened, widening the divide between consumer and enterprise silicon. Now the AI boom is quietly dismantling that logic.

The Evolution of FP64 on Nvidia GPUs. The FP64:FP32 ratio on Nvidia consumer GPUs has degraded consistently since the Fermi architecture debuted in 2010. On Fermi, the GF100 die shipped to both the GeForce and Tesla lines; the hardware supported 1:2 FP64:FP32, but GeForce cards were driver-capped to 1:8.[1] Over time, Nvidia moved away from "artificially" lowering FP64 performance on consumer GPUs. Instead, the architectural split became structural: the hardware itself is fundamentally different across product tiers. While datacenter GPUs have consistently kept a 1:2 or 1:3 FP64:FP32 ratio (until the recent AI boom, more on that later), the ratio on consumer GPUs has consistently gotten worse: from 1:8 on Fermi in 2010, to 1:24 on Kepler in 2012, to 1:32 in 2014, to our final 1:64 ratio on Ampere in 2020. This also means that over 15 years, from the GTX 480 in 2010 to the RTX 5090 in 2025, FP64 performance on consumer GPUs increased only 9.65x, from 0.17 TFLOPS to 1.64 TFLOPS, while over the same span FP32 performance improved a whopping 77.63x, from 1.35 TFLOPS to 104.8 TFLOPS.

[Figure: FP32 vs FP64 throughput scaling across Nvidia GPU generations.][2]

Nvidia's Move to Segment the Market. So why has FP64 performance on consumer GPUs progressively weakened (relative to FP32) while staying consistently strong on enterprise hardware? If this were purely a technical or cost constraint, you would expect the gap to be smaller. And since Nvidia has historically taken deliberate steps to limit double-precision (FP64) throughput on GeForce cards, it is hard to argue the gap is accidental. The much simpler explanation is market segmentation. Most consumer workloads, such as gaming, 3D rendering, or video editing, do not need FP64. High-performance computing, on the other hand, has long relied on double precision: fields such as computational fluid dynamics, climate modeling, quantitative finance, and computational chemistry depend on numerical stability and precision that single precision (FP32) cannot always provide. So FP64 becomes a very convenient lever: weaken it on consumer GPUs, preserve it on enterprise parts, and you get a clean dividing line between markets. Nvidia has been fairly open about this. In the consumer Ampere GA102 whitepaper, they note "The small number of FP64 hardware ...
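Those growth figures are easy to verify. A few lines of Python, using only the numbers quoted in the post, reproduce the 9.65x and 77.63x factors and today's roughly 64:1 gap:

```python
# Figures as quoted in the post (TFLOPS).
gtx_480  = {"fp32": 1.35,  "fp64": 0.17}   # Fermi, 2010 (1:8 driver cap)
rtx_5090 = {"fp32": 104.8, "fp64": 1.64}   # consumer Blackwell, 2025

print(rtx_5090["fp32"] / gtx_480["fp32"])   # FP32 growth over 15 years: ~77.6x
print(rtx_5090["fp64"] / gtx_480["fp64"])   # FP64 growth over 15 years: ~9.6x
print(rtx_5090["fp32"] / rtx_5090["fp64"])  # today's FP32:FP64 gap: ~63.9, i.e. ~64:1
```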

Source: Hacker News | Original Link

Predicting the 2026 dev environment: even more exhausting?

Predicting the 2026 dev environment: even more exhausting? V2EX > Programmers. Posted by YanSeven, 14h34m ago, 1946 views.

Although V-friends and the big companies may have been vibe coding for a while now, it probably hasn't spread all that widely yet. This year I can vaguely see a trend forming: some middle and upper managers who have long been away from the front line will "grind" people harder, especially the kind of managers at small and mid-size companies. From the marketing and the slide decks they get the impression that LLMs are more or less fully baked and can deliver real productivity gains. They'll likely hand down "boastful" orders, figuring that a project previously estimated at two weeks to a month can be compressed into two or three days. So they'll issue "ten thousand jin per mu"-style directives [a Great Leap Forward reference] without restraint, and casually push front-line staff onto unfamiliar tech stacks. Iterate fast, ship fast.

Tags: management, trends, tech stack. 21 replies:

1. 7gugu: Guaranteed. At the start-of-year meeting our lead already spelled it out: after the New Year the workload will double.
2. HFX3389: AI raised the ceiling on "how much can get done," but it didn't change "who decides how much gets done." At bottom it's still a distribution problem: who cuts the cake, who decides who does how much work, who decides how the money is split, who decides the hours.
3. zhujinliang: For backend folks it's great. Knock out a demo page in half an hour and throw it in the frontend/app devs' faces: this is exactly how my API is called, the contract is fine, don't come bugging me about the interface.
4. think2011: @zhujinliang Don't celebrate too early; soon we'll all be do-everything engineers.
5. zuosiruan: @zhujinliang At some companies the backend already covers the frontend work too.
6. kneo: Then let the AI overcommit a little and teach the boss a lesson.
7. JoeDH: Early last year I vibe-coded a whole project end to end. My lead mentioned I should write up what I'd learned and give a talk, and if it looked promising the department would buy some AI accounts. I weaseled out of it; I really didn't want to give that talk. Reasons: 1) I don't want to share this stuff with colleagues. As programmers, if they aren't following these trends, that's on them. 2) If I gave the talk, the lead pulled in the department's core developers to listen, and department-wide vibe coding then got pushed to crank project "efficiency," that whole climate would trace back to me, and I don't want my colleagues' dirty looks.
8. andforce: @JoeDH #7 Sigh, the frontend guy next to me has already started talking people out of sharing. I'm on the fence about presenting to my own team, for roughly the same reasons.
9. YanSeven (OP): @JoeDH I agree about not sharing publicly. If an organization doesn't have a correct top-to-bottom understanding of what LLMs can do and what they bring, better not to stir things up.
10. PeiXyJ: @zhujinliang I've already got my frontend engineers covering backend tasks (implementing the CRUD endpoints themselves, to spec).
11. levelworm: The big companies have already started consolidating. Once the consolidation wraps up this year, layoffs will probably begin. Those who remain keep the same workload; after all, there's AI to help.
12. levelworm: @JoeDH You just shouldn't have let anyone know...
13. levelworm: @andforce Please hold the line on basic decency...
14. BeiChuanAlex: No need to worry too much. Leaders still like having people under them; who wants to be a general with no troops? That appetite for power is human nature, and bossing an AI around is nowhere near as satisfying as bossing people around.
15. povsister: Unless you abandon quality control entirely, AI will always be bottlenecked by human review speed. Human attention is finite; unless you plan to hand the whole project to the AI, you simply cannot keep up with the rate it spews tokens, even with AI-assisted testing and review. You'll end up more tired than before.
16. levelworm: @povsister #15 True. A colleague turns out hundreds of lines of Python a day; I've given up on reviewing it.
17. samy: These managers are idiots. They don't understand a damn thing about tech and just wave their arms around. They really think AI is omnipotent. Two or three days? Let them try writing the code themselves!
18. musi: No need to predict; see this: /t/1192730
19. niub: Before the New Year the boss announced in the group chat that starting in 2026 there will be a new way of evaluating the dev team.
20. 106npo: For a project cranked out in two or three days, forget managing the code; just manage the docs.
21. Timzzzzz: @zhujinliang Quite the temper, kid.

Source: V2EX | Original Link

Pivoting from ops: after a month of indie development I built an AI video / AI image generation tool, and I'd like your advice

Pivoting from ops: after a month of indie development I built an AI video / AI image generation tool, and I'd like your advice. V2EX > Share & Create. Posted by kafkaG, 18h13m ago, 956 views.

Hello V-friends. I'm the developer of PopcornAI Art, an ops person turned indie developer. After a month of work the product is finally out; I'd like to share the journey here and hear your feedback and suggestions.

1. Why build this product: at the end of 2025 I started vibe coding and used it to build a few small tools. The results were great, and in that moment I felt finally unshackled: I could build products without a team and without much money, and get other people using what I made. Seeing others use something you created is a real joy, and it has long been a wish of mine. Since then I'd been hunting for a direction; honestly it felt a bit like a hammer looking for nails. Along the way I built a few small products (time management, task planning) that got no traction. Later, through traffic analysis, I found that AI video generation plus AI image generation draws huge traffic. My wife also works in design and marketing, where assets and video design are a hard requirement; she spends a few hundred yuan a month on subscriptions and they're often still not enough. And advertising has always been a huge market, with video and images as its basic carriers; today's multimodal AI is bound to empower the whole advertising industry, push its products forward, and play an ever larger role in it. In short: the market is big enough, the need is real, and it keeps moving forward. Of course an opportunity like this means heavy competition too, but when the opportunity comes, getting on board first is what matters; competitiveness is built in the field. So I started building the site around mid-January 2026, just after New Year's Day, and finished on the second day of the Lunar New Year (February 18).

2. What PopcornAI Art can do. The product currently has these core features: 1) Reference-to-video: upload a reference image (say, a character) and the AI keeps the character consistent across the generated video; good for IP animation or serialized content. 2) Image-to-video: turn an image into a moving video, with multi-frame reference guidance. 3) Text-to-video: generate video straight from a text description, 1-10 seconds, up to 1080p. 4) Image-to-image: generate new images from a reference while keeping the subject consistent. 5) Text-to-image: high-quality images from text, in multiple styles. 6) Effect templates: 100+ video templates so far, covering Viral Dance, Product Ads, Cinematic, Art Styles, Fun Transform, Holidays, and more.

3. Some development lessons. As an indie developer I hit a few potholes this month, all since resolved. A few takeaways: 1) Stick to the MVP principle. Done beats perfect: I wasted some stretches in the middle, even pulling an all-nighter over unimportant problems and features. Looking back it wasn't necessary; I could have launched first and improved later. Don't chase polished interaction and visuals before launch; ship once the core features work, then iterate on traffic data. Control the dev cycle: however big the product, keep the MVP cycle short (if I had to name a number, I'd say 1-2 weeks is enough for an indie dev). A longer cycle means the requirements weren't broken down finely enough, or you've slid into the abyss of over-engineering. 2) AI-driven, efficient development. Use AI tools in combination: I run Cursor, Claude Code, ChatGPT, and Gemini together, discussing requirements with Gemini, having ChatGPT play architect and code reviewer, and letting Claude Code handle execution and bug fixes. Talk before you code: the best flow is to discuss the requirement in depth with the AI and nail down the prompt before letting it generate code; that beats coding blind. Use ready-made templates: for developers starting from zero or in a hurry, a mature SaaS template (one with Auth and Stripe already wired in) sidesteps the login and subscription rabbit holes. This matters: don't spend too much time on things that don't matter.

4. A few questions for you. The product just launched and I'd love your input: 1) After trying it, what feels bad or worth complaining about? 2) If you're a creator, what matters most to you in an AI video tool: price, quality, speed, or feature breadth? 3) What must-have features are still missing? 4) Any fellow indie devs willing to share promotion experience?

The site is https://popcornai.art/ . Feedback welcome; I'll reply carefully to every suggestion. Thanks!

24 replies (first 10 shown):

1. kulove: You could add some second-level pages for SEO; the homepage video display could also use polish, the experience isn't great right now.
2. catwalk: Nothing wrong with the process, but in AI video/image generation the application layer has little room and is easily steamrolled: one new model feature can wipe out all your hard work, so it may come to nothing. Also, I personally really dislike AI-generated video and images; video especially looks fake at a glance, though high-quality images are acceptable. Flip the question around: would you yourself keep watching AI videos?
3. humbass: Not much of a chance. It depends entirely on someone else's engine; you're just calling an API. These are someone else's leftovers.
4. imxiaolong: From a layman's perspective, it looks very well done.
5. nbndco: The only question that needs answering: the big companies' AI subscriptions already give everyone who needs AI generous generation quotas, so why pay more to use your wrapper, and such a bare-bones wrapper at that? As for promotion: you're essentially reselling an API, and the wrapper barely matters. If you're cheaper than everyone else, traffic will come without any marketing; people may even resell you.
6. lagrange7: So which model is this built on? I read the whole post and you never say. Isn't everyone on Kling and SD these days?
7. kafkaG (OP): @kulove Thanks for the suggestions, noted; I'll improve.
8. kafkaG (OP): @catwalk Thanks, I'll keep optimizing the features so the product isn't so easily replaced and can serve at least part of the audience's needs.
9. kafkaG (OP): @humbass Thanks for the advice.
10. kafkaG (OP): @imxiaolong Thanks, I'll keep at it.

Source: V2EX | Original Link

What are people thinking when they keep 50+ browser tabs open at work?

What are people thinking when they keep 50+ browser tabs open at work? V2EX > Workplace. Posted by bugmakerxs, 14h54m ago, 1760 views.

Not trying to start a fight, genuinely curious: some tabs just hang there and may never be looked at again in this lifetime, so why keep them open? My own habit is to clean up once I pass 20 or so; it doesn't cost much. The colleague I admire most keeps three or four Chrome windows, every one packed full of tabs, yet in a meeting he can instantly pull up exactly the tab he wants to show. With my brain capacity I cannot imagine how he does it.

Tags: tabs, windows, habits. 35 replies (first 28 shown):

1. xiaojie668329: I'm usually at 200-300 (´▽`)
2. bugmakerxs (OP): @xiaojie668329 ...and how many of those are active?
3. Lyet813: On PC I open and close constantly; on Android I glance occasionally and close as I go.
4. RoccoShi: Locality of reference: even with tons open, I mostly click back and forth among the last few. I just never get around to cleaning up.
5. kdwnil: Tabs for the same task go in the same window; juggling several tasks means several windows, so dozens open is normal. Chrome reclaims tabs that haven't been touched in a while anyway, and middle-clicking away a big pile of tabs once a task is done feels great.
6. uqf0663: Back in the desktop-QQ era before WeChat, a classmate of mine ran five QQ accounts at once, each with dozens of group-chat windows; the collapsed taskbar entries needed a scrollbar. He could chat in many groups at once on different topics without ever mixing them up. I was stunned; I can barely keep up with two or three group chats.
7. JoeDH: Sometimes while slacking off I spot an interesting technical thread or some solution and keep it for when I have time, and it piles up. I only close them one by one after I've read them.
8. xiaojie668329: @bugmakerxs A few dozen active. Some I opened, never had time for, and didn't want to close, so they just sit there.
9. bclerdx: @RoccoShi No time to clean up? Really? Closing tabs costs nothing; ten minutes is plenty to close most of them.
10. bugmakerxs (OP): @RoccoShi Packed that tight, you can only tell them apart by the icons...
11. donaldturinglee: Sometimes they stay open so I don't forget; very often, once closed, a page can never be found again.
12. stinkytofux: Equally curious how those people use a browser. I keep single-digit tab counts; beyond that it's pointless, I can't remember them.
13. cwcc: Off-topic thought: foreign sites rarely spawn many tabs, while domestic sites love target=_blank, so the count roughly triples on domestic sites. Beyond that I recognize tabs by favicon; my brain caches a few fixed positions.
14. wniming: I don't get it either. I cap each window at 8 tabs and open more windows for more tasks.
15. follower: Install the OneTab extension; I've used it for years.
16. z1645444: Most of mine are temporary and follow a project. Real work mixes parallel states: one sprint goes into testing, another enters development a few days later, and ad-hoc todos keep landing on top. I pull the online docs, API docs, this sprint's pages, upstream monitoring and so on into a tab group named after the sprint or the todo, and I don't delete the group until the work is officially done. Outside holiday seasons I usually keep 3-4 groups, which is around 60 tabs expanded.
17. Kaiyuan: My cousin does foreign trade: product pages, admin consoles, chat windows all day. Maybe not always 50, but thirty-plus long term.
18. haruhi: Some work genuinely goes faster with many tabs, e.g. multiple projects coordinated across teams, where you're checking, proofreading, and updating documents across tabs. I sit around 30 tabs long term (they rotate out as projects end) and 50+ when busy. If I closed every doc after reading and then wanted it back later, either re-finding it at the source or logging it in a master doc before closing would be too much hassle.
19. midasplus: Closing has a cost too. Leaving them open doesn't hurt me, so they stay open.
20. trn4: Never more than 10 tabs; anything I'll need later gets its URL saved.
21. fyq: Honest question: do your pages never hit session expiry? Many of mine need a fresh login after a few hours. With a hundred pages sitting there, re-logging in (some even with MFA) when you finally come back is worse than keeping a shortcut in a folder.
22. lonely701: Even light research opens 40-50 tabs, which is exactly why vertical tabs are such a needed feature.
23. Planarians: Probably hundreds on my phone; rereading them once a year is actually kind of fun. On the desktop everything gets filed into Obsidian or OneTab.
24. michaelzxp: Around 100, split into a work group, a personal-interests group, and a tools group.
25. chenliangngng: That was life before large models; think of it as your brain's maximum context tokens. Since LLMs, I rarely go above 15.
26. june4: A lot of people never learned bookmarks, so tabs became their bookmarks.
27. leegradyllljjjj: Depends on RAM. On my 16 GB machine, open five-odd and everything lags.
28. levelworm: You get used to it. I run five windows: three in regular use, two minimized. Window one: comms and office pages (Okta, the Google suite, AWS, Jira, Confluence, etc.), 20-30 tabs, about half in regular use. Window two: the real work, e.g. Databricks/BigQuery, split into 3-5 groups by ticket type: one group for odd jobs, the rest for bigger tasks. About 30-40 tabs here, most in active use. Window three: my own learning, which changes daily with my interests. For example, yesterday ...

Source: V2EX | Original Link

Anthropic officially bans using subscription auth for third party use

Legal and compliance (Claude Code Docs).

Legal agreements. License: your use of Claude Code is subject to the Commercial Terms (for Team, Enterprise, and Claude API users) or the Consumer Terms of Service (for Free, Pro, and Max users). Commercial agreements: whether you're using the Claude API directly (1P) or accessing it through AWS Bedrock or Google Vertex (3P), your existing commercial agreement applies to Claude Code usage, unless we've mutually agreed otherwise.

Compliance. Healthcare compliance (BAA): if a customer has executed a Business Associate Agreement (BAA) with us and has Zero Data Retention (ZDR) activated, the BAA automatically extends to cover Claude Code, applying to that customer's API traffic flowing through Claude Code.

Usage policy. Acceptable use: Claude Code usage is subject to the Anthropic Usage Policy. Advertised usage limits for Pro and Max plans assume ordinary, individual usage of Claude Code and the Agent SDK.

Authentication and credential use: Claude Code authenticates with Anthropic's servers using OAuth tokens or API keys, which serve different purposes. OAuth authentication (used with Free, Pro, and Max plans) is intended exclusively for Claude Code and Claude.ai. Using OAuth tokens obtained through Claude Free, Pro, or Max accounts in any other product, tool, or service (including the Agent SDK) is not permitted and constitutes a violation of the Consumer Terms of Service. Developers building products or services that interact with Claude's capabilities, including those using the Agent SDK, should use API key authentication through Claude Console or a supported cloud provider. Anthropic does not permit third-party developers to offer Claude.ai login or to route requests through Free, Pro, or Max plan credentials on behalf of their users. Anthropic reserves the right to take measures to enforce these restrictions and may do so without prior notice. For questions about permitted authentication methods for your use case, contact sales.

Security and trust. Trust and safety: more information is available in the Anthropic Trust Center and Transparency Hub. Security vulnerability reporting: Anthropic manages its security program through HackerOne; use their form to report vulnerabilities.
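For third-party developers, the compliant path is plain API-key authentication. A minimal sketch with the official Anthropic Python SDK follows (the model id is illustrative; the key comes from Claude Console, never from a Free, Pro, or Max OAuth login):

```python
import os
from anthropic import Anthropic

# API key issued via Claude Console. Per the policy above, OAuth tokens
# from consumer (Free/Pro/Max) plans must not be used in third-party tools.
client = Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

message = client.messages.create(
    model="claude-sonnet-4-5",  # illustrative model id
    max_tokens=256,
    messages=[{"role": "user", "content": "Hello from a third-party tool."}],
)
print(message.content[0].text)
```

The same API key works through AWS Bedrock or Google Vertex equivalents if your commercial agreement routes through those providers.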

Source: Hacker News | Original Link

How to Choose Between Hindley-Milner and Bidirectional Typing

How to Choose Between Hindley-Milner and Bidirectional Typing. Thunderseethe's Devlog.

This question is common enough you've probably heard it posed countless times: "Should my new programming language use a Hindley-Milner (HM) type system or a Bidirectional (Bidir) one?" What's that? I need to understand friends don't just bring up type inference in casual conversation? OK, ouch, fair enough. But... whatever. This is my blog. We're doing it anyway! I don't know what you expected when you clicked on a programming languages blog.

Picking a type system is a real barrier for would-be language developers. Eyes full of trepidation, they navigate the labyrinth of nuanced choices that go into everything a programming language asks of them. Which type system to choose is just another quandary in the quagmire as they trudge toward a working prototype. It's understandable they'd want to make a quick decision and return to marching. But this is the wrong question to ask.

The question presumes that HM and Bidir are two ends of a spectrum. On one end you have HM, with type variables and unification and all that jazz. On the other end you have bidirectional typing, where annotations decide your types and little inference is involved. This spectrum, however, is a false dichotomy. What folks should actually be asking is "Does my language need generics?". This question frames the problem around what your language needs, rather than an arbitrary choice between two algorithms and their abstract tradeoffs. Perhaps more importantly, it determines whether you'll need unification. Generics, generally, require a type system that supports unification. Unification is the process of assigning and solving type variables. If you've ever seen Rust infer a type like Vec<_>, that's unification chugging along.

Note: we don't have time today, but if you're interested in how unification works, I have a tutorial about it.

When facing down designing a type system, knowing whether you need unification decides a lot for you. Unification sits center stage in Hindley-Milner: when you pick HM, you pick unification. The story is more interesting for bidirectional typing. If you look to the literature, you'll find plenty of examples of bidirectional typing without unification in sight. By introducing annotations at key locations, you can type check sophisticated programs with no type variables. A key insight of bidirectional typing is how much you can do without unification. And don't get me wrong; it is cool how much it can do. But this leads to the incorrect perception that bidir can't or shouldn't use unification. The opposite is true. Bidirectional typing supports all the same features as HM typing, and more, forming more of a superset relationship. Unification slots into bidirectional typing like a vim user slots into home row. This is because bidirectional typechecking is a superset of HM. Imagine we have some AST (in Rust):

```rust
enum Ast {
    // some cases, probably
}
```

And we have a ...
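Since the post's pivot question is "does my language need unification?", a toy unifier makes the idea concrete. A minimal sketch in Python for brevity (illustrative, not the author's code): types are constructor pairs like ("Vec", [arg]) or type-variable names like "t0", and unification finds a substitution that makes two types equal:

```python
def resolve(t, subst):
    """Follow variable bindings until we reach a constructor or a free var."""
    while isinstance(t, str) and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst):
    """Extend subst so that a and b become the same type, or fail."""
    a, b = resolve(a, subst), resolve(b, subst)
    if a == b:
        return subst
    if isinstance(a, str):               # a is a free type variable: bind it
        return {**subst, a: b}           # (a real unifier adds an occurs check)
    if isinstance(b, str):
        return {**subst, b: a}
    (ctor_a, args_a), (ctor_b, args_b) = a, b
    if ctor_a != ctor_b or len(args_a) != len(args_b):
        raise TypeError(f"cannot unify {a} with {b}")
    for x, y in zip(args_a, args_b):
        subst = unify(x, y, subst)
    return subst

# Solving Vec<t0> ~ Vec<Int> binds t0 = Int, which is exactly what happens
# when Rust infers the element type of a Vec<_>.
print(unify(("Vec", ["t0"]), ("Vec", [("Int", [])]), {}))
# {'t0': ('Int', [])}
```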

Source: Hacker News | Original Link

Ladybird: Closing this as we are no longer pursuing Swift adoption

Swift 6.0 Blockers, Issue #933, LadybirdBrowser/ladybird (closed). Labels: swift, task list. Opened by ADKaster on Aug 2, 2024.

List of issues preventing moving Swift 6.0 support out of an experimental state.

Swift issues:

- Please backport d8352e93c1c8042d9166eab3d76d6c07ef585b6d (swiftlang/llvm-project#8998). Details: Swift's version of LLVM is missing the fix for "[Clang] ICE in CheckPointerToMemberOperands passing decltype of lambda" (llvm/llvm-project#53815). This means that any assertions build of LLVM from the Swift open-source project cannot build our code; snapshot builds are released with assertions on. Workaround: build Swift from source on Linux without LLVM assertions, or use macOS. PR: "[Clang] [Sema] Handle placeholders in '.*' expressions (#83103)" (swiftlang/llvm-project#9038). Fixed in the Swift 6.0.0 release.
- Interop: compiler and C++ bridging header disagree on the ABI of Optional (swiftlang/swift#75593). Details: it is not currently possible to return a Swift Optional of a small C++ type back to C++; the compiler and the generated bridging header disagree on how that is supposed to be done. Workaround: don't use Optional; use a return type that forces the C++ type to be heap allocated. Array is one alternative.
- Interop: compiling with C++17 or higher on Ubuntu 22.04 fails with cyclic header dependencies in libstdc++ (swiftlang/swift#75661). Details: Swift's Clang module map for libstdc++ contains cycles when <execution> is included; see https://forums.swift.org/t/swift-5-9-release-on-ubuntu-22-04-fails-to-build-std-module/67659 . Workaround: edit /lib/swift/linux/libstdcxx.h to comment out the #include line. PR (just a workaround, not a fix): "[cxx-interop] Disable c++ execution header with libstdcxx versions >= 11" (swiftlang/swift#75662); 6.0 backport: swiftlang/swift#75971. Fixed in swiftlang/swift:main and release/6.0, but not in 6.0.0 or 6.0.1.
- Interop: cannot return swift::Optional from a C++ function (swiftlang/swift#76024). Details: returning the binding types swift::Optional or swift::String from a C++ function is not supported. Workaround: return std:: types?
- Swift cannot import the libstdc++-13 chrono header in C++23 mode (swiftlang/swift#76809). Details: Swift 6.0 cannot import ...

Source: Hacker News | Original Link

27-year-old Apple iBooks can connect to Wi-Fi and download official updates

Posted to r/MacOS on 18 Feb 2026, 709 points (89% upvoted), 185 comments: "MacOS which officially supports 27 year old iBooks can still connect to a modern Wi-Fi network, and download updates from apple servers without any modifications, Apple is the opposite of planned obsolescence."

Source: Hacker News | Original Link