Atlas Computing

About

Sourcing owners for neglected AI risks

Because civilization is not prepared for rapid AI progress.

We identify unowned problems, map stakeholders, draft milestones, source early funders, and recruit an expert leader to take ownership.

  • There are predictable gaps on the path of AI progress.

  • Addressing these may require a set of skills no individual has.

  • However, many of those skills are only needed once.

  • We bring the single-use skills to help specialists build the orgs humanity needs.

How we work

We take a “problem-first” approach.

We’re not an incubator, venture studio, or fellowship, and we don’t try to be. We love meeting brilliant people with great ideas and we’ll happily route them to a better institutional home.

We lower the barrier to entry.

We make it easier for people with the right expertise to work on critical problems. We can help de-risk a career change and provide direction, context, introductions to stakeholders and supporters, and clear next steps.

We start with experts.

We ask “What neglected problems will societies need to solve if AI keeps getting more capable?” Then we identify stakeholders for each risk, interested funders, the shape of a solution, and the skills needed to prototype it.

Our output is focused teams.

We’re not a think tank; for us reports are a means, not the end. We take responsibility for problems until we can pass the torch to someone with the skills and resources to focus full-time on the problem.

We’re mapping all the gaps between today and a future of AI resilience.

Contact

Email: hello@atlascomputing.org

Projects

Community Building

We’re advancing a new AI architecture that provides provable safety properties by constraining AI outputs with specifications. Advancing it requires growing the number of people aware of and working on this architecture until it becomes a ubiquitous mechanism for assuring safety.

Neglected catastrophic risks from AI

We’ve started a list of potential risks that would noticeably and negatively affect most humans if they came to pass.

We consider a problem neglected if no one is focused full-time on solving it.

Community mailing list on Guaranteed Safe AI

We started a public Google Groups mailing list dedicated to discussing AI architectures designed to have provable safety properties.

You can see past conversations or join directly.

Current and past events

See the events we have organized and those we are looking forward to organizing or co-organizing!

If you’d like to organize something with us, please email us at hello@atlascomputing.org.

Cyber

We think AI shouldn’t be a black box to users, but instead should output a formal specification of solution properties as a reviewable intermediate result. The first step is building tools that (1) help users generate specifications for software, (2) generate programs from those specifications, and (3) generate proofs that the programs satisfy the specifications. Our early experiments focus on understanding and describing state-of-the-art research at the intersection of AI and formal methods, laying the foundation for these tools.
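
As a rough sketch of this three-step pipeline (the function names below are hypothetical illustrations of ours, not an actual Atlas Computing API), the intended data flow in Python might look like:

# Hypothetical sketch of the three-step toolchain described above;
# the function names are illustrative, not a real API.

def generate_spec(user_intent: str) -> str:
    """Step 1: help the user turn informal intent into a formal specification."""
    raise NotImplementedError

def generate_program(spec: str) -> str:
    """Step 2: synthesize a program intended to satisfy the specification."""
    raise NotImplementedError

def generate_proof(spec: str, program: str) -> str:
    """Step 3: produce a machine-checkable proof that the program satisfies
    the specification; an independent proof checker, not the model, is trusted."""
    raise NotImplementedError

def pipeline(user_intent: str) -> tuple[str, str, str]:
    # The specification is the reviewable intermediate result a user can
    # inspect before trusting the generated program.
    spec = generate_spec(user_intent)
    program = generate_program(spec)
    proof = generate_proof(spec, program)
    return spec, program, proof

The design point is that trust rests on the proof checker rather than on the language model: even if steps 2 and 3 are unreliable, an incorrect program cannot ship with a valid proof.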

Specification IDE

We’re prototyping a tool to show that users with no formal-methods experience can understand a formal specification; the tool maps subsections of the spec to natural-language descriptions and annotates the comparison.
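
A minimal sketch of the data model such a tool might use, assuming (our assumption, not a described implementation) that each fragment of the formal spec is paired with a plain-English gloss:

from dataclasses import dataclass

@dataclass
class SpecAnnotation:
    spec_fragment: str  # a subsection of the formal specification
    gloss: str          # its natural-language description

# Hypothetical annotation for a sorting specification:
annotation = SpecAnnotation(
    spec_fragment="sorted(ys) and multiset(ys) == multiset(xs)",
    gloss="The output is a sorted rearrangement of the input.",
)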

The Opportunity for AI and FV

A nontechnical explanation of (1) why AI poses a cybersecurity risk, (2) the value of and difficulties in deploying formal verification (FV), (3) how advances in language models could overcome those difficulties, and (4) next steps to advance the use of formal verification.

Porting Libraries Between Coq and Lean with ChatGPT

At Atlas, Jason Gross developed a testbed to demonstrate that language models are sufficiently capable to convert Coq proofs to Lean proofs, building a verification tool to show that the compiled Coq proofs and the transpiled Lean proofs were equivalent up to isomorphism.
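
To illustrate the kind of statement being ported (a generic example of ours, not drawn from the testbed itself), here is a standard list lemma on both sides:

-- Illustrative example only; not taken from the actual testbed.
-- Coq original:
--   Lemma app_nil_r : forall (A : Type) (l : list A), l ++ nil = l.
-- Lean 4 translation; the testbed's checker would confirm the two
-- statements are equivalent up to isomorphism.
theorem app_nil_r (α : Type) (l : List α) : l ++ [] = l := by
  simp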

AI Assisted FV Toolchain

Formally verifying software involves generating (a) the software itself, (b) a specification of how the software should behave, and (c) a mathematical proof that the software satisfies the specification.
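
As a toy illustration in Lean 4 (ours, not output from any Atlas toolchain), the three artifacts can sit side by side in a single file:

-- (a) the software: a small function on natural numbers
def add2 (n : Nat) : Nat := n + 2

-- (b) the specification: the output is strictly larger than the input
def Spec (n : Nat) : Prop := add2 n > n

-- (c) the proof: the software satisfies the specification
theorem add2_satisfies_spec (n : Nat) : Spec n := by
  unfold Spec add2
  omega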
