Mistral launches its first reasoning models: Magistral Small, on Hugging Face under an Apache 2.0 license, and Magistral Medium, in preview on Mistral's Le Chat
Former employees of OpenAI are asking the top law enforcement officers in California and Delaware to stop the company from shifting control of its artificial intelligence technology from a nonprofit charity to a for-profit business. OpenAI said in response that “any changes to our existing structure would be in service of ensuring the broader public can benefit from AI.” It said its for-profit will be a public benefit corporation, similar to other AI labs like Anthropic and tech billionaire Elon Musk’s xAI, except that OpenAI will still preserve a nonprofit arm.
Meta is creating a new AI lab dedicated to pursuing "superintelligence," a hypothetical AI system that exceeds the powers of the human brain, as the tech giant jockeys to stay competitive in the technology race, according to four people with knowledge of the company's plans. Meta has recruited Alexandr Wang, the founder and chief executive of the AI start-up Scale AI, to join the new lab, the people said, and has been in talks to invest billions of dollars in his company as part of a deal that would also bring other Scale AI employees to the company.
This work lays the foundations for a rigorous ontological characterization of love, addressing its philosophical complexity and scientific relevance, with particular emphasis on psychology and sociology, as well as highlighting ways in which such a characterization enhances relevant AI-based applications. The position defended here is that love is best understood as a concatenation of passive sensations (e.g., emotional arousal) and active evaluative judgments (e.g., perceiving the beloved as valuable), in the interest of balancing the involuntary aspects of love with its rational accountability. To provide a structured foundation, the paper draws on Basic Formal Ontology (BFO) and other applied ontological methods to differentiate various senses of love. This work engages with objections to the understanding of love as concatenation, particularly concerning the relationship between sensation and judgment. A causal correlation model is defended, ensuring that the affective and cognitive components are linked. By offering a precise and scalable ontological account, this work paves the way for future interdisciplinary applications, making love a subject of formal inquiry in ontology engineering, artificial intelligence, and the sciences.
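The abstract stays at the conceptual level; purely as an illustrative toy (not the authors' BFO axiomatization), the "concatenation" thesis can be sketched as a composite of a passive sensation and an active evaluative judgment with an explicit causal link. Every class and field name below is invented for the sketch.

```python
# Toy sketch of "love as concatenation": a passive sensation (emotional arousal)
# paired with an active evaluative judgment, required to be causally correlated.
# Names loosely echo BFO-style categories but are illustrative only.
from dataclasses import dataclass

@dataclass
class Sensation:          # passive, involuntary component
    bearer: str           # the person who feels it
    description: str      # e.g., "emotional arousal"

@dataclass
class Judgment:           # active, evaluative component
    agent: str            # the person judging
    target: str           # the beloved
    content: str          # e.g., "the beloved is valuable"

@dataclass
class Love:               # the concatenation of the two components
    sensation: Sensation
    judgment: Judgment

    def causally_linked(self) -> bool:
        # Stand-in for the causal correlation model: the affective and
        # cognitive components must at least belong to the same agent.
        return self.sensation.bearer == self.judgment.agent

romantic = Love(
    Sensation(bearer="Alice", description="emotional arousal in Bob's presence"),
    Judgment(agent="Alice", target="Bob", content="Bob is valuable"),
)
assert romantic.causally_linked()
```

The causally_linked check is only a placeholder for the paper's causal correlation model, which ties the affective and cognitive components together more substantively.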
A critical challenge remains unresolved as generative AI systems are rapidly deployed across organizational settings. Despite significant advances in memory components such as RAG, vector stores, and LLM agents, these systems still have substantial memory limitations: generative AI workflows rarely store or reflect on the full context in which decisions are made, which leads to repeated errors and a general lack of clarity. This paper introduces Contextual Memory Intelligence (CMI) as a new foundational paradigm for building intelligent systems. It repositions memory as an adaptive infrastructure necessary for longitudinal coherence, explainability, and responsible decision-making, rather than as passive data. Drawing on cognitive science, organizational theory, human-computer interaction, and AI governance, CMI formalizes the structured capture, inference, and regeneration of context as a fundamental system capability. To operationalize this vision, the paper presents the Insight Layer, a modular architecture that uses human-in-the-loop reflection, drift detection, and rationale preservation to incorporate contextual memory into systems. The paper argues that CMI allows systems to reason with data, history, judgment, and changing context, thereby addressing a foundational blind spot in current AI architectures and governance efforts. CMI thus offers a framework for creating intelligent systems that are effective, reflective, auditable, and socially responsible, enhancing human-AI collaboration, generative AI design, and institutional resilience.
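The abstract does not specify an interface for the Insight Layer; the following is a minimal sketch, assuming a simple record-and-review design, of how structured context capture, drift detection, rationale preservation, and human-in-the-loop review might fit together. Every name here (InsightLayer, ContextRecord, the drift heuristic) is hypothetical, not the paper's API.

```python
# Hypothetical Insight Layer sketch: capture decision context, preserve the
# rationale, flag drift, and route flagged records to a human reviewer.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ContextRecord:
    decision: str                 # what the system decided
    rationale: str                # why it decided that (rationale preservation)
    inputs: dict                  # the data the decision was based on
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class InsightLayer:
    def __init__(self, drift_threshold: float = 0.5):
        self.records: list[ContextRecord] = []
        self.review_queue: list[ContextRecord] = []   # human-in-the-loop reflection
        self.drift_threshold = drift_threshold

    def capture(self, decision: str, rationale: str, inputs: dict) -> ContextRecord:
        """Structured capture of the context in which a decision was made."""
        record = ContextRecord(decision, rationale, inputs)
        if self._drifted(record):
            self.review_queue.append(record)
        self.records.append(record)
        return record

    def _drifted(self, record: ContextRecord) -> bool:
        """Naive drift heuristic: fraction of inputs that changed since the last record."""
        if not self.records:
            return False
        previous = self.records[-1].inputs
        changed = sum(1 for k in record.inputs if previous.get(k) != record.inputs[k])
        return changed / max(len(record.inputs), 1) > self.drift_threshold

    def explain(self, decision: str) -> list[str]:
        """Regenerate context for auditability: every preserved rationale for a decision."""
        return [r.rationale for r in self.records if r.decision == decision]
```

Records flagged by the drift heuristic land in a review queue for a human, while explain() regenerates the preserved rationales, which is the kind of longitudinal, auditable memory the paper argues for.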
We introduce a novel learning and planning framework that replaces traditional reward-based optimisation with constructive logical inference. In our model, actions, transitions, and goals are represented as logical propositions, and decision-making proceeds by building constructive proofs under intuitionistic logic. This method ensures that state transitions and policies are accepted only when supported by verifiable preconditions -- eschewing probabilistic trial-and-error in favour of guaranteed logical validity. We implement a symbolic agent operating in a structured gridworld, where reaching a goal requires satisfying a chain of intermediate subgoals (e.g., collecting keys to open doors), each governed by logical constraints. Unlike conventional reinforcement learning agents, which require extensive exploration and suffer from unsafe or invalid transitions, our constructive agent builds a provably correct plan through goal chaining, condition tracking, and knowledge accumulation. Empirical comparison with Q-learning demonstrates that our method achieves perfect safety, interpretable behaviour, and efficient convergence with no invalid actions, highlighting its potential for safe planning, symbolic cognition, and trustworthy AI. This work presents a new direction for reinforcement learning grounded not in numeric optimisation, but in constructive logic and proof theory.
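The paper's implementation is not reproduced here; as a rough, heavily simplified illustration of planning by verified preconditions and goal chaining (rather than reward-driven exploration), consider the toy key-door-goal world below. The world layout, action names, and rules are invented for the sketch and stand in for the paper's constructive proof machinery.

```python
# Toy illustration of planning by precondition checking and goal chaining,
# in contrast to reward-driven exploration. World, names, and rules are invented.

# Facts the agent has verified so far (its accumulated knowledge base).
knowledge = {"at(start)"}

# Each action is a logical rule: preconditions that must already be proven,
# and effects that become new verified facts once the action is applied.
ACTIONS = {
    "pick_up_key":  {"pre": {"at(start)"},               "eff": {"has(key)"}},
    "open_door":    {"pre": {"has(key)"},                "eff": {"open(door)"}},
    "walk_through": {"pre": {"open(door)", "at(start)"}, "eff": {"at(goal)"}},
}

def applicable(action: str, kb: set[str]) -> bool:
    """An action is accepted only if every precondition is already established."""
    return ACTIONS[action]["pre"] <= kb

def plan(goal: str, kb: set[str]) -> list[str]:
    """Chain actions constructively until the goal fact becomes derivable."""
    steps: list[str] = []
    while goal not in kb:
        progressed = False
        for name, rule in ACTIONS.items():
            if name not in steps and applicable(name, kb):
                kb |= rule["eff"]          # knowledge accumulation
                steps.append(name)
                progressed = True
                break
        if not progressed:
            raise ValueError(f"No provable plan for {goal}")   # never guess
    return steps

print(plan("at(goal)", knowledge))   # ['pick_up_key', 'open_door', 'walk_through']
```

Because an action is applied only when its preconditions are already established facts, the resulting plan is valid by construction; there is no exploratory phase in which unsafe or invalid transitions could be taken.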
OpenAI
San Francisco, Singapore, Tokyo, Dublin, Ireland, London, Munich, Paris, New York City, Seattle, Washington DC
Google DeepMind
London, Mountain View, Bangalore, New York City, Tokyo, San Francisco, Zurich, Cambridge, Seattle