Senior/Staff Software Engineer, Managed AI
Crusoe
Crusoe Energy is on a mission to unlock value in stranded energy resources through the power of computation.
Take a look at what we do! - https://www.youtube.com/watch?v=Rlt8k71Quqw
We aim to align the long-term interests of the climate with the future of global computing infrastructure. As data centers consume an exponentially growing share of the world's power to deliver technology to every connected device, we are driven to ensure that the energy meeting that demand is sourced in an environmentally responsible fashion. Crusoe co-locates mobile data centers with stranded energy resources, like flare gas and underloaded renewables, to deliver low-cost, carbon-negative distributed computing solutions. Crusoe Cloud is a managed cloud services platform powered by stranded energy that enables climate-friendly innovation in computationally intensive fields, including artificial intelligence, graphics rendering, and computational biology.
About the Role
As a Senior/Staff Software Engineer on the Managed AI team at Crusoe, you'll have a pivotal role in shaping the architecture and scalability of our next-generation AI inference platform. You will lead the design and implementation of core systems for our AI services, including resilient fault-tolerant queues, model catalogs, and scheduling mechanisms optimized for cost and performance. This role gives you the opportunity to build and scale infrastructure capable of handling millions of API requests per second across thousands of customers.
From day one, you'll own critical subsystems for managed AI inference, helping to serve large language models (LLMs) to a global audience. As part of a dynamic, fast-growing team, you’ll collaborate cross-functionally, influence the long-term vision of the platform, and contribute to cutting-edge AI technologies. This is a unique opportunity to build a high-performance AI product that will be central to Crusoe's business growth.
A Day In the Life
As a Senior/Staff Software Engineer on the Managed AI team, you'll play a crucial role in building the infrastructure to serve artificial neural networks and, in the near term, large language models (LLMs) at scale. You'll own the design and implementation of key subsystems for resiliency and quality of service. You will build model catalogs, billing systems, and dynamic pricing models, and have the opportunity to go deep into the model deployment stack for cost-optimized scheduling. Each day, you'll collaborate with a small but growing team of engineers to build scalable cloud-based solutions that can handle millions of requests per second.
You’ll work closely with cross-functional teams, including product management and business strategy, to develop a customer-facing API that serves real-world AI models. Every day will present an opportunity to influence the long-term vision and architectural decisions, from the first lines of code to full-scale implementation. You’ll also be prototyping rapidly, optimizing performance on GPUs, and ensuring high availability as part of the MVP development. Whether it’s contributing to open-source AI frameworks or diving into low-level performance optimizations, your contributions will directly impact both the company’s growth and the product’s success.
You Will Thrive In This Role If:
You have a strong background in distributed systems design and implementation, with proven experience delivering early-stage projects under tight deadlines.
You are passionate about building scalable AI infrastructure and have experience with cloud-based services that can handle millions of requests.
You enjoy problem-solving around performance optimizations, particularly when it comes to AI inference on GPU-based systems.
You have a proactive and collaborative approach, with the ability to work autonomously while engaging with a rapidly growing team.
You have strong communication skills, both written and verbal, and can translate complex technical challenges into understandable terms for cross-functional teams.
You’re excited about working in a fast-paced environment, contributing to a new product category, and having a tangible influence on the long-term vision of the AI platform.
You are passionate about open-source contributions and AI inference frameworks like vLLM, with a desire to push the boundaries of performance and scalability.
You are keen on customer-facing product development, with a desire to build user-friendly APIs that integrate real-world feedback for continuous improvement.
Qualifications
Must-Have:
Advanced degree in Computer Science, Engineering, or a related field.
Demonstrable experience in distributed systems design and implementation.
Proven track record of delivering early-stage projects under tight deadlines.
Expertise with cloud-based services, such as elastic compute, object storage, virtual private networks, and managed databases.
Experience with generative AI (large language models, multimodal models).
Experience with container orchestration platforms (e.g., Kubernetes) and microservices architectures.
Experience with REST APIs and common communication protocols such as gRPC.
Demonstrated experience across the software development lifecycle and familiarity with CI/CD tools.
Nice-to-Have:
Proficiency in Golang or Python for large-scale, production-level services.
Familiarity with AI infrastructure, including training, inference, and ETL pipelines.
Contributions to open-source AI projects such as vLLM or similar frameworks.
Performance optimizations on GPU systems and inference frameworks.
Growth Opportunities
Shape the foundation of a cutting-edge, customer-facing AI inference platform.
Become a technical leader in performance optimization and AI infrastructure.
Collaborate with partners like Intel and NVIDIA on pushing the limits of AI performance.
Contribute to open-source AI frameworks and gain visibility in the AI community.
Take on leadership roles as the team scales, with opportunities to mentor junior engineers and influence the product roadmap.
Benefits
Hybrid work schedule
Industry competitive pay
Restricted Stock Units in a fast-growing, well-funded technology company
Health insurance package options that include HDHP and PPO, vision, and dental for you and your dependents
Employer contributions to HSA accounts
Paid Parental Leave
Paid life insurance, short-term and long-term disability
Teladoc
401(k) with a 100% match up to 4% of salary
Generous paid time off and holiday schedule
Cell phone reimbursement
Tuition reimbursement
Subscription to the Calm app
MetLife Legal
Company-paid commuter benefit ($50 per pay period)
Compensation Range
Compensation will be paid in the range of $183,000 - $250,000. Restricted Stock Units are included in all offers. Compensation will be determined by the applicant's knowledge, education, and abilities, as well as internal equity and alignment with market data.
Crusoe Energy is an Equal Opportunity Employer. Employment decisions are made without regard to race, color, religion, disability, genetic information, pregnancy, citizenship, marital status, sex/gender, sexual preference/orientation, gender identity, age, veteran status, national origin, or any other status protected by law or regulation.