Bloody Sunshine

Just too damn hot

WR 20251107

Cover image: took the new ES8 for a test drive this week. It's big and looks expensive.

My stomach has been acting up badly these past couple of days, so this update is a bit late. Off to camp on the 🚽 again in a minute...

【Apple】iOS 26.1 lets you tweak Liquid Glass, and it’s out now

Apple has just released iOS 26.1, which includes a new transparency toggle for Liquid Glass, expanded language features, and new controls for the Apple Music and Camera apps. The toggle helps address some of the legibility issues introduced in iOS 26 by allowing iPhone users to tone down the glassy design for buttons, tabs, and other navigational elements.

I guessed there would be a setting to turn off part of the effect, but I didn't expect it to arrive this directly and this quickly. Going by past practice, if it were only a minority complaint it would have been tucked into Accessibility; putting it front and center suggests the user backlash was loud. Apple clearly didn't do nearly enough up-front user research on this design 😂

【AI】New prompt injection papers: Agents Rule of Two and The Attacker Moves Second

At a high level, the Agents Rule of Two states that until robustness research allows us to reliably detect and refuse prompt injection, agents must satisfy no more than two of the following three properties within a session to avoid the highest impact consequences of prompt injection.

[A] An agent can process untrustworthy inputs
[B] An agent can have access to sensitive systems or private data
[C] An agent can change state or communicate externally

It’s still possible that all three properties are necessary to carry out a request. If an agent requires all three without starting a new session (i.e., with a fresh context window), then the agent should not be permitted to operate autonomously and at a minimum requires supervision — via human-in-the-loop approval or another reliable means of validation.

There is currently no absolutely reliable defense against prompt injection, so this framework from Meta offers strong practical guidance. For personal use: an AI-assisted browser already sits naturally at untrusted input + external access, so never let an AI browser touch private data or system-level permissions. The risk is extremely high.
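To make the rule concrete, here is a minimal sketch in Python of how the Rule of Two could be applied as a pre-flight check on an agent session. The AgentSession class, its three capability flags, and run_task are hypothetical names for illustration only; they are not code from the Meta post or any real agent framework.

```python
# Minimal sketch of the "Agents Rule of Two" as a pre-flight check.
# AgentSession, its flags, and run_task are hypothetical illustration names,
# not an API from the Meta post or any real agent framework.
from dataclasses import dataclass


@dataclass
class AgentSession:
    processes_untrusted_input: bool      # [A] e.g. reads web pages, emails, uploaded files
    accesses_sensitive_data: bool        # [B] e.g. private files, credentials, internal systems
    changes_state_or_communicates: bool  # [C] e.g. sends requests, writes files, posts messages

    def enabled_properties(self) -> int:
        # Count how many of [A], [B], [C] are active in this session.
        return sum([
            self.processes_untrusted_input,
            self.accesses_sensitive_data,
            self.changes_state_or_communicates,
        ])

    def may_run_autonomously(self) -> bool:
        # Rule of Two: at most two of the three properties per session.
        return self.enabled_properties() <= 2


def run_task(session: AgentSession) -> None:
    if session.may_run_autonomously():
        print("OK: session stays within the Rule of Two, may run unsupervised")
    else:
        # All three properties present: fall back to human-in-the-loop approval
        # (or another reliable validation step) before any action executes.
        print("Blocked: [A]+[B]+[C] all enabled, require human approval")


# An AI browser that reads arbitrary pages and submits forms already uses
# [A] and [C]; keeping [B] off (no private data or system access) keeps it
# within the rule.
run_task(AgentSession(
    processes_untrusted_input=True,
    accesses_sensitive_data=False,
    changes_state_or_communicates=True,
))
```

In this reading, the AI-browser scenario above already consumes [A] and [C], so granting it private data or system permissions would trip the gate and demand human approval for every action.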

【Cars】Model S crash kills 5; Tesla sued over doors that were hard to open after power loss

According to the complaint filed last Friday by the couple's four children, the root of the tragedy was that the Model S's lithium-ion battery pack caused the electronic door-lock system to fail. The complaint says that, based on earlier fire incidents, Tesla had long known about this safety hazard yet still "consciously departed from known and feasible safety practices." The children also argue that rear-seat passengers in a Model S, like Michelle, are especially vulnerable after a crash: they have to lift the carpet to find the mechanical release handle used for escape, a design that is far from intuitive. The complaint discloses that a nearby resident who called 911 reported hearing cries for help coming from inside the car.

Many of China's new-energy vehicles have similar problems: there is no unified standard for in-cabin mechanical door releases, so passengers can't open the doors right away. But Tesla putting the mechanical handle under the carpet really goes too far; the car's reliability simply does not justify a design that hides the mechanical release.

【AI】OpenAI denies that ChatGPT has stopped offering legal and medical advice

OpenAI says ChatGPT’s behavior “remains unchanged” after reports across social media falsely claimed that new updates to its usage policy prevent the chatbot from offering legal and medical advice. Karan Singhal, OpenAI’s head of health AI, writes on X that the claims are “not true.”

I fell for this a few days ago too and thought ChatGPT would no longer give medical and legal advice. In reality it's just a liability disclaimer and doesn't affect ordinary users. That said, this rule could still be tightened at some point to protect jobs in the US service sector.

Some other odds and ends