r/MachineLearning · 7h ago · 6 · research benchmark

An experiment testing frontier multimodal models' ability to appraise fine art from vision alone, revealing a gap between visual recognition and commitment to vision-based decisions. The analysis compares image-only vs. image+metadata approaches across GPT-4o, Claude 3.5 Sonnet, Gemini 3.1 Pro, and others, with implications for understanding multimodal model behavior and visual grounding.

OpenAI Blog · 8h ago · 7 · tool agent workflow

Codex app gains computer use capability, in-app browsing, image generation, and memory features that enable more autonomous agent behaviors for developers. The plugin system and memory persistence could streamline repetitive coding workflows and integrate with existing development tools.

Latent Space · 11h ago · 6 · agent workflow deployment api update

Article discusses how AI is changing software development workflows, particularly the potential decline of pull requests and code reviews in favor of prompt-based contributions and agent-oriented development. Covers OpenAI's new Agents SDK with sandbox integrations (Modal, Cloudflare, e2b, Vercel) that pair stateless orchestration with stateful execution, plus Cloudflare's agent tools, all relevant to emerging AI agent deployment architectures.

r/LocalLLaMA · 15h ago · 8 · new model open source research

HY-World 2.0 is an open-source multimodal world model that generates editable 3D assets (meshes/Gaussian Splatting) from text, images, or videos—a paradigm shift from video-only world models. The framework includes WorldMirror 2.0 for 3D reconstruction and supports interactive exploration, with all model weights and code being released for reproducibility.

r/MachineLearning · 15h ago · 8 · research open source fine tuning

An undergraduate researcher identifies and solves a critical optimization pathology in multi-timescale Actor-Critic architectures where temporal attention mechanisms exploit policy gradients ('Surrogate Objective Hacking') or collapse to short-horizon policies. The proposed solution decouples the Actor from multi-timescale Critic representations, forcing robust auxiliary learning while isolating policy updates to long-term advantages, demonstrated via a minimal PyTorch implementation.
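
The decoupling described above can be illustrated with a heavily simplified numeric sketch. The post's actual implementation is in PyTorch; everything here, including the function names and the numbers, is illustrative rather than the author's code.

```python
# Sketch of the decoupling idea: the critic learns from returns at
# several timescales (auxiliary learning), while the actor's update
# uses only the long-horizon advantage and treats the critic's value
# as a constant, so no gradient can flow back into the critic.
# All names and numbers below are illustrative.

def critic_loss(values, returns):
    # auxiliary objective across ALL timescales keeps the critic honest
    return sum((v - r) ** 2 for v, r in zip(values, returns)) / len(values)

def actor_advantage(values, returns, long_horizon_idx=-1):
    # the actor sees ONLY the long-horizon advantage; since it is a
    # plain float here, the policy cannot exploit ("hack") the critic's
    # multi-timescale objective through this term
    return returns[long_horizon_idx] - values[long_horizon_idx]

values  = [0.5, 0.8, 1.1]   # critic estimates at short/mid/long horizons
returns = [0.4, 1.0, 2.0]   # empirical returns at matching horizons

adv = actor_advantage(values, returns)   # long-horizon only
aux = critic_loss(values, returns)       # all timescales
```

In an autograd framework, "treating the value as a constant" is what a stop-gradient (`.detach()` in PyTorch) accomplishes, which is the mechanism that isolates policy updates from the auxiliary objective.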

r/MachineLearning · 16h ago · 8 · benchmark open source research api update

An engineer built an open-source benchmark that evaluates frontier LLMs (GPT-5.3, Claude Opus 4.6, KIMI K2) on political stance, plotting 98 structured questions on a 2D compass. Key findings: refusal behavior is itself a measurable political signal; offering an opt-out option dramatically changes outputs (Claude flipped quadrants when given permission to decline); and models show distinct censorship patterns (KIMI blocks geopolitical content via API errors, while GPT opts out universally when allowed). The repo is directly runnable against any model with an API.
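
The benchmark's two key ideas, mapping answers onto a 2D compass and treating refusals as a first-class signal rather than discarding them, can be sketched as follows. The schema and scoring below are guesses at the general shape, not the repo's actual code.

```python
# Illustrative scorer: Likert answers map to a 2D political-compass
# point; refusals are counted as their own measurable signal.

LIKERT = {"strongly_disagree": -2, "disagree": -1, "neutral": 0,
          "agree": 1, "strongly_agree": 2}

def score_answers(answers):
    """answers: list of (axis, direction, response) tuples.
    axis is 'economic' or 'social'; direction is +1/-1 (whether
    agreement pushes toward the positive end of that axis);
    response is a Likert label or 'refused'."""
    totals = {"economic": 0.0, "social": 0.0}
    counts = {"economic": 0, "social": 0}
    refusals = 0
    for axis, direction, response in answers:
        if response == "refused":
            refusals += 1          # refusal is itself a signal
            continue
        totals[axis] += direction * LIKERT[response]
        counts[axis] += 1
    point = {a: (totals[a] / counts[a] if counts[a] else 0.0) for a in totals}
    return point, refusals / len(answers)

answers = [
    ("economic", +1, "agree"),
    ("economic", -1, "strongly_agree"),
    ("social",   +1, "refused"),
    ("social",   -1, "disagree"),
]
point, refusal_rate = score_answers(answers)
```

Scoring refusals separately is what makes findings like "Claude flipped quadrants when allowed to decline" visible: the compass point and the refusal rate move independently.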

r/MachineLearning · 17h ago · 5 · tutorial research

A Reddit post asking for learning resources in AI for Materials Science and computational chemistry, mentioning a UChicago course on applied AI. While potentially useful for engineers exploring domain-specific AI applications, it's primarily a community question rather than technical content or a concrete tool/resource announcement.

Simon Willison · 18h ago · 7 · tool workflow tutorial

Simon Willison built a custom preview UI using Claude Artifacts to validate YAML news files and catch markdown/YAML errors before deployment. This demonstrates a practical workflow for using Claude's code generation capabilities to reduce friction in content management tasks, leveraging Claude's ability to analyze GitHub repositories directly in conversation.

Simon Willison · 19h ago · 5 · tool api update

Datasette alpha introduces modern CSRF protection that relies on browser headers instead of Django-style CSRF tokens, and adds a RenameTableEvent so plugins can stay in sync when tables are renamed. Technically sound engineering, but primarily a database tooling update with limited direct relevance to AI/ML workflows.
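
The general technique, header-based CSRF protection, can be sketched in a few lines. This illustrates the approach using the `Sec-Fetch-Site` fetch-metadata header; it is not Datasette's actual implementation, which may handle more cases.

```python
# Header-based CSRF check: instead of embedding a per-form token,
# reject state-changing requests whose Sec-Fetch-Site header says
# they originated on another site.

SAFE_METHODS = {"GET", "HEAD", "OPTIONS"}

def allow_request(method, headers):
    if method in SAFE_METHODS:
        return True                  # reads are not CSRF targets
    site = headers.get("sec-fetch-site")
    if site is None:
        return False                 # header absent: fail closed (a policy choice)
    # "same-origin" = our own pages; "none" = user-initiated
    # (address bar, bookmark). Anything else is cross-origin.
    return site in ("same-origin", "none")
```

Modern browsers attach `Sec-Fetch-Site` to every request, so a `cross-site` value on a POST is a reliable rejection signal; how to treat requests lacking the header (very old browsers) is the main design decision.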

r/MachineLearning · 20h ago · 6 · research benchmark

A researcher shares reproducibility issues encountered when validating claims from 7 papers this year, finding 4 irreproducible results with 2 having unresolved GitHub issues. This highlights systemic problems in ML research quality and code availability that directly impact engineers evaluating and building on published work.

Simon Willison · 1d ago · 8 · new model api update tool

Google released Gemini 3.1 Flash TTS, a new text-to-speech model accessible via the standard Gemini API that supports prompt-based direction for audio generation, including accent and tone control. The model is demonstrated with practical examples showing how detailed prompts can generate contextual speech variations (e.g., regional accents), making it useful for developers building voice-enabled applications.

DeepMind Blog · 1d ago · 8 · new model api update inference

Gemini 3.1 Flash TTS, Google's latest text-to-speech model, introduces granular audio tags for precise vocal control across 70+ languages with improved naturalness (Elo score 1,211 on benchmarks). Developers can now embed natural language commands directly in text to control style, pacing, and delivery, with all audio watermarked using SynthID, available in Google AI Studio, Vertex AI, and Google Vids.

r/MachineLearning · 1d ago · 7 · research prompt engineering agent

Technical analysis documenting five social engineering attacks against GPT-4, GPT-4o, and Claude 3.5 Sonnet, demonstrating alignment failures through psychological manipulation vectors (guilt, peer pressure, identity destabilization, etc.). The writeup argues these vulnerabilities stem from training data rather than mathematical exploits, reframing jailbreak research from software vulnerability to inherited social failure modes.

HuggingFace Blog · 1d ago · 7 · benchmark agent tool research

VAKRA is a new executable benchmark for evaluating AI agents on compositional reasoning across APIs and documents in enterprise-like environments, featuring 8,000+ locally-hosted APIs across 62 domains with real databases. It measures multi-step workflows (3-7 reasoning chains) and reveals significant performance gaps in current models, with detailed failure mode analysis included.

OpenAI Blog · 1d ago · 8 · tool agent api update deployment

OpenAI's Agents SDK now includes native sandbox execution and model-native harness features, enabling developers to build more secure and reliable long-running agents with safe file and tool access. This is a practical SDK update that directly impacts how software engineers implement agent-based workflows in production.

HuggingFace Blog · 1d ago · 7 · agent tool deployment

Holo3, a computer-use AI model, is now accessible via HoloTab, a Chrome extension that automates web tasks through natural language commands and visual demonstration-based routine recording. The extension enables agentic automation for repetitive workflows across any website without requiring technical setup, representing a practical application of vision models and action planning for browser-based task automation.

r/MachineLearning · 1d ago · 7 · fine tuning benchmark inference workflow

An engineer implemented GRPO (reinforcement learning) fine-tuning for summarization on a 3-node MLX cluster, combining length penalties with a quality reward (ROUGE-L) to bring average rollout length down to ~64 tokens. The work demonstrates practical techniques for controlling output length while maintaining quality, scored via multi-axis LLM-as-a-Judge evaluation (faithfulness, coverage, conciseness, clarity); next steps focus on isolating the reward function's impact and detecting reward gaming.
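
A minimal sketch of the combined reward described: a ROUGE-L-style quality term (LCS-based F-measure) minus a length penalty. The weights and the linear penalty shape are illustrative assumptions; the post does not specify them.

```python
# Combined reward: ROUGE-L quality minus a penalty for exceeding a
# target summary length. Weights and penalty shape are illustrative.

def lcs_len(a, b):
    # classic O(len(a) * len(b)) longest-common-subsequence DP
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[-1][-1]

def rouge_l_f1(candidate, reference):
    c, r = candidate.split(), reference.split()
    lcs = lcs_len(c, r)
    if lcs == 0:
        return 0.0
    prec, rec = lcs / len(c), lcs / len(r)
    return 2 * prec * rec / (prec + rec)

def reward(candidate, reference, target_tokens=64, length_weight=0.01):
    quality = rouge_l_f1(candidate, reference)
    overflow = max(0, len(candidate.split()) - target_tokens)
    return quality - length_weight * overflow   # linear penalty past target

r = reward("the cat sat on the mat", "the cat sat on a mat")
```

The penalty only activates past the target length, so the policy is free to be concise without being pushed toward degenerate one-token rollouts; detecting reward gaming (e.g., padding quality-free tokens up to exactly the target) is the open question the post flags.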

r/MachineLearning · 1d ago · 7 · benchmark research

Critical discussion of a research paper's evaluation methodology for LLM SQL generation: the authors scored outputs with natural-language similarity metrics rather than execution-based metrics, producing roughly 20% false positives and raising concerns about the paper's validity and about peer-review standards at top-tier venues.
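
The distinction at issue can be made concrete with a tiny sqlite example: an execution-based metric runs both queries and compares result sets, while a surface-similarity metric only compares the SQL strings. The schema and queries below are illustrative, not from the paper.

```python
import sqlite3

def results_match(db, predicted_sql, gold_sql):
    """Execution-based metric: run both queries and compare results
    as order-insensitive lists."""
    try:
        pred = db.execute(predicted_sql).fetchall()
    except sqlite3.Error:
        return False                     # invalid SQL can never be correct
    return sorted(pred) == sorted(db.execute(gold_sql).fetchall())

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER, age INTEGER)")
db.executemany("INSERT INTO users VALUES (?, ?)", [(1, 30), (2, 40)])

gold = "SELECT id FROM users WHERE age > 35"
# Textually one character away from gold, but semantically wrong:
near_miss = "SELECT id FROM users WHERE age < 35"
# Textually distant from gold, but semantically equivalent:
rewrite = "SELECT id FROM users WHERE NOT age <= 35"
```

A string-similarity metric rates `near_miss` highly even though it returns the wrong rows, which is exactly the false-positive failure mode the discussion raises; `rewrite` shows the converse case, where only execution reveals equivalence.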