Climate tech startups are companies that use technology and innovation to address the climate crisis, with a primary focus on either reducing greenhouse gas emissions or helping society adapt to climate change impacts. Their unifying mission is to create scalable solutions that accelerate the transition to a sustainable, low-carbon future. Solutions to the climate crisis are ever more important as climate-driven extreme weather disasters increase globally. In 2024, climate disasters caused more than $417B in damages globally, and there's no slowing down in 2025, with the LA wildfires destroying more than $135B in the first month of the year alone. Climate tech startups are at the forefront of building impactful solutions to the climate crisis, and they're using generative AI to build as quickly as possible.
In this post, we show how climate tech startups are developing foundation models (FMs) that use extensive environmental datasets to tackle issues such as carbon capture, carbon-negative fuels, new materials design for microplastics destruction, and ecosystem preservation. These specialized models require advanced computational capabilities to process and analyze vast amounts of data effectively.
Amazon Web Services (AWS) provides the essential compute infrastructure to support these endeavors, offering scalable and powerful resources through Amazon SageMaker HyperPod. SageMaker HyperPod is a purpose-built infrastructure service that automates the management of large-scale AI training clusters so developers can efficiently build and train complex models such as large language models (LLMs), automatically handling cluster provisioning, monitoring, and fault tolerance across thousands of GPUs. With SageMaker HyperPod, startups can train complex AI models on diverse environmental datasets, including satellite imagery and atmospheric measurements, with enhanced speed and efficiency. This computational backbone is vital for startups striving to create solutions that are not only innovative but also scalable and impactful.
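As a rough sketch of what provisioning such a cluster looks like, the following builds a request for the SageMaker `CreateCluster` API. The cluster name, instance types and counts, lifecycle script location, and role ARN are placeholder assumptions for illustration, not values from this post:

```python
# Placeholder values -- substitute resources from your own AWS account.
CLUSTER_NAME = "climate-fm-training"                       # hypothetical name
EXECUTION_ROLE = "arn:aws:iam::111122223333:role/HyperPodRole"
LIFECYCLE_S3 = "s3://example-bucket/lifecycle-scripts/"    # hypothetical bucket

def build_cluster_request():
    """Assemble a SageMaker HyperPod CreateCluster request with one
    controller instance group and one GPU worker instance group."""
    group_defaults = {
        "LifeCycleConfig": {
            "SourceS3Uri": LIFECYCLE_S3,
            "OnCreate": "on_create.sh",   # setup script run on each node
        },
        "ExecutionRole": EXECUTION_ROLE,
    }
    return {
        "ClusterName": CLUSTER_NAME,
        "InstanceGroups": [
            {
                "InstanceGroupName": "controller",
                "InstanceType": "ml.m5.xlarge",
                "InstanceCount": 1,
                **group_defaults,
            },
            {
                "InstanceGroupName": "gpu-workers",
                "InstanceType": "ml.p5.48xlarge",
                "InstanceCount": 4,
                **group_defaults,
            },
        ],
    }

# To actually provision (requires AWS credentials and service quota):
#   import boto3
#   boto3.client("sagemaker").create_cluster(**build_cluster_request())
```

Once the cluster is running, training jobs are submitted through the cluster's orchestrator (Slurm or Amazon EKS) rather than through this API.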
The growing complexity of environmental data demands robust data infrastructure and sophisticated model architectures. Integrating multimodal data, employing specialized attention mechanisms for spatial-temporal data, and using reinforcement learning are crucial for building effective climate-focused models. SageMaker HyperPod optimized GPU clustering and scalable resources help startups save time and money while meeting advanced technical requirements, which means they can focus on innovation. As climate technology demands grow, these capabilities allow startups to develop transformative environmental solutions using Amazon SageMaker HyperPod.
Trends among climate tech startups building with generative AI
Climate tech startups' adoption of generative AI is evolving rapidly. Starting in early 2023, we saw the first wave of climate tech startups adopting generative AI to optimize operations. For example, startups such as BrainBox AI and Pendulum used Amazon Bedrock and fine-tuned existing LLMs on AWS Trainium using Amazon SageMaker to more rapidly onboard new customers through automated document ingestion and data extraction. Midway through 2023, we saw the next wave of climate tech startups building sophisticated intelligent assistants by fine-tuning existing LLMs for specific use cases. For example, NET2GRID used Amazon SageMaker for fine-tuning and deploying at scale LLMs based on Llama 7B to build EnergyAI, an assistant that provides fast, personalized responses to utility customers' energy-related questions.
Over the past 6 months, we've seen a flurry of climate tech startups building FMs that address specific climate and environmental challenges. Unlike language-based models, these startups are building models based on real-world data, like weather or geospatial earth data. Whereas LLMs such as Anthropic's Claude or Amazon Nova have hundreds of billions of parameters, climate tech startups are building smaller models with just a few billion parameters. This means these models are faster and cheaper to train. We're seeing some emerging trends in the use cases or climate challenges that startups are addressing by building FMs. Here are the top use cases, in order of popularity:
- Weather – Trained on historical weather data, these models offer short-term and long-term, hyperaccurate, hyperlocal weather and climate predictions, some specializing in specific weather elements like wind, heat, or solar.
- Sustainable material discovery – Trained on scientific data, these models invent new sustainable materials that solve specific problems, like more efficient direct air capture sorbents to reduce the cost of carbon removal, or molecules to destroy microplastics in the environment.
- Natural ecosystems – Trained on a combination of data from satellites, lidar, and on-the-ground sensors, these models offer insights into natural ecosystems, biodiversity, and wildfire predictions.
- Geological modeling – Trained on geological data, these models help determine the best locations for geothermal or mining operations to reduce waste and save money.
To provide a more concrete look at these trends, the following is a deep dive into how climate tech startups are building FMs on AWS.
Orbital Materials: Foundation models for sustainable material discovery
Orbital Materials has built a proprietary AI platform to design, synthesize, and test new sustainable materials. Developing new advanced materials has traditionally been a slow process of trial and error in the lab. Orbital replaces this with generative AI design, radically speeding up materials discovery and new technology commercialization. They've launched a generative AI model called "Orb" that suggests new material designs, which the team then tests and refines in the lab.
Orb is a diffusion model that Orbital Materials trained from scratch using SageMaker HyperPod. The first product the startup designed with Orb is a sorbent for carbon capture in direct air capture facilities. Since establishing its lab in the first quarter of 2024, Orbital has achieved a tenfold improvement in its material's performance using its AI platform, an order of magnitude faster than traditional development, breaking new ground in carbon removal efficacy. By improving the performance of the materials, the company can help drive down the costs of carbon removal, which can enable rapid scale-up. They chose SageMaker HyperPod because they "like the one-stop shop for control and monitoring," explained Jonathan Godwin, CEO of Orbital Materials. Orbital was able to reduce the total cost of ownership (TCO) of its GPU cluster by using SageMaker HyperPod deep health checks to stress-test GPU instances and swap out faulty nodes. Moreover, Orbital can use SageMaker HyperPod to automatically swap out failing nodes and restart model training from the last saved checkpoint, freeing up time for the Orbital Materials team. The SageMaker HyperPod monitoring agent continually monitors for and detects potential issues, including memory exhaustion, disk failures, GPU anomalies, kernel deadlocks, container runtime issues, and out-of-memory (OOM) crashes. Based on the underlying issue, the monitoring agent either replaces or reboots the node.
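The checkpoint-and-resume pattern that this auto-recovery relies on can be sketched in framework-agnostic terms. The file layout, step counts, and toy "training update" below are illustrative assumptions; a real job would use its framework's own checkpoint utilities:

```python
import json
import os

CHECKPOINT_PATH = "checkpoint.json"  # hypothetical checkpoint location

def save_checkpoint(step, state, path=CHECKPOINT_PATH):
    """Persist training progress so a restarted job can pick up here."""
    with open(path, "w") as f:
        json.dump({"step": step, "state": state}, f)

def load_checkpoint(path=CHECKPOINT_PATH):
    """Return the last saved step and state, or a fresh start."""
    if os.path.exists(path):
        with open(path) as f:
            ckpt = json.load(f)
        return ckpt["step"], ckpt["state"]
    return 0, {"loss": None}

def train(total_steps=100, checkpoint_every=10):
    """Resume from the latest checkpoint, then continue training."""
    start, state = load_checkpoint()
    for step in range(start, total_steps):
        state = {"loss": 1.0 / (step + 1)}  # stand-in for a real update
        if (step + 1) % checkpoint_every == 0:
            save_checkpoint(step + 1, state)
    return state
```

When a faulty node is replaced, the relaunched job calls `load_checkpoint` first, so at most `checkpoint_every` steps of work are lost rather than the whole run.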
With the launch of SageMaker HyperPod on Amazon Elastic Kubernetes Service (Amazon EKS), Orbital can set up a unified control plane running both CPU-based workloads and GPU-accelerated tasks within the same Kubernetes cluster. This architectural approach eliminates the traditional complexity of managing separate clusters for different compute resources, significantly reducing operational overhead. Orbital can also monitor the health status of SageMaker HyperPod nodes through Amazon CloudWatch Container Insights with enhanced observability for Amazon EKS. Amazon CloudWatch Container Insights collects, aggregates, and summarizes metrics and logs from containerized applications and microservices, providing detailed insights into performance, health, and status metrics for CPU, GPU, Trainium, or Elastic Fabric Adapter (EFA) and file systems, down to the container level.
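A sketch of how such a health query can be assembled with the CloudWatch `GetMetricData` API follows. The `node_gpu_utilization` metric name and `ClusterName` dimension are assumptions based on Container Insights naming conventions; verify them against the metrics your own cluster publishes before relying on them:

```python
import datetime

NAMESPACE = "ContainerInsights"
METRIC = "node_gpu_utilization"   # assumed Container Insights metric name

def build_gpu_metric_query(cluster_name, hours=1):
    """Build a CloudWatch GetMetricData request for average GPU
    utilization across an EKS-orchestrated HyperPod cluster's nodes."""
    now = datetime.datetime.now(datetime.timezone.utc)
    return {
        "StartTime": now - datetime.timedelta(hours=hours),
        "EndTime": now,
        "MetricDataQueries": [
            {
                "Id": "gpu_util",
                "MetricStat": {
                    "Metric": {
                        "Namespace": NAMESPACE,
                        "MetricName": METRIC,
                        "Dimensions": [
                            {"Name": "ClusterName", "Value": cluster_name},
                        ],
                    },
                    "Period": 300,   # 5-minute resolution
                    "Stat": "Average",
                },
            },
        ],
    }

# To run the query (requires AWS credentials):
#   import boto3
#   cw = boto3.client("cloudwatch")
#   data = cw.get_metric_data(**build_gpu_metric_query("my-eks-cluster"))
```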
AWS and Orbital Materials have established a deep partnership that enables flywheel growth. The companies have entered a multiyear partnership, in which Orbital Materials builds its FMs with SageMaker HyperPod and other AWS services. In return, Orbital Materials is using AI to develop new data center decarbonization and efficiency technologies. To further spin the flywheel, Orbital will be making Orb, its market-leading open source AI model for simulating advanced materials, generally available to AWS customers through Amazon SageMaker JumpStart and AWS Marketplace. This marks the first AI-for-materials model on AWS platforms. With Orb, AWS customers working on advanced materials and technologies such as semiconductors, batteries, and electronics can access market-leading accelerated research and development (R&D) within a secure and unified cloud environment.
The architectural advantages of SageMaker HyperPod on Amazon EKS are demonstrated in the following diagram. The diagram illustrates how Orbital can establish a unified control plane that manages both CPU-based workloads and GPU-accelerated tasks within a single Kubernetes cluster. This streamlined architecture eliminates the traditional complexity of managing separate clusters for different compute resources, providing a more efficient and integrated approach to resource management. The visualization shows how this consolidated infrastructure enables Orbital to seamlessly orchestrate their diverse computational needs through a single control interface.
Hum.AI: Foundation models for earth observation
Hum.AI is building generative AI FMs that provide general intelligence of the natural world. Customers can use the platform to track and predict ecosystems and biodiversity in order to understand business impact and better protect the environment. For example, they work with coastal communities who use the platform and its insights to restore coastal ecosystems and increase biodiversity.
Hum.AI's foundation model looks at natural world data and learns to represent it visually. They're training on 50 years of historical data collected by satellites, which amounts to thousands of petabytes of data. To accommodate processing this vast dataset, they chose SageMaker HyperPod for its scalable infrastructure. Through their innovative model architecture, the company achieved the ability to see underwater from space for the very first time, overcoming the historic challenges posed by water reflections.
Hum.AI's FM architecture employs a variational autoencoder (VAE) and generative adversarial network (GAN) hybrid design, specifically optimized for satellite imagery analysis. It's an encoder-decoder model, where the encoder transforms satellite data into a learned latent space, while the decoder reconstructs the imagery (after processing in the latent space), maintaining consistency across different satellite sources. The discriminator network provides both adversarial training signals and learned feature-wise reconstruction metrics. This approach helps preserve important ecosystem details that would otherwise be lost with traditional pixel-based comparisons, particularly for underwater environments, where water reflections typically interfere with visibility.
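A toy sketch of how such a hybrid objective can be composed is shown below. The network shapes, random linear maps standing in for trained networks, and loss weights are invented for illustration; Hum.AI's actual architecture and training code are not public:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "networks": random linear maps standing in for a trained encoder,
# decoder, and discriminator feature extractor.
D_IN, D_LATENT, D_FEAT = 64, 8, 16
W_enc = rng.normal(size=(D_IN, 2 * D_LATENT)) / np.sqrt(D_IN)   # mean and log-variance
W_dec = rng.normal(size=(D_LATENT, D_IN)) / np.sqrt(D_LATENT)
W_disc = rng.normal(size=(D_IN, D_FEAT)) / np.sqrt(D_IN)

def hybrid_losses(x):
    """Compute the three terms of a VAE-GAN objective for a batch x."""
    h = x @ W_enc
    mu, logvar = h[:, :D_LATENT], h[:, D_LATENT:]
    z = mu + np.exp(0.5 * logvar) * rng.normal(size=mu.shape)  # reparameterization
    x_hat = z @ W_dec                                          # reconstruction

    # KL divergence of the approximate posterior from a unit Gaussian.
    kl = -0.5 * np.mean(1 + logvar - mu**2 - np.exp(logvar))

    # Feature-wise reconstruction loss: compare discriminator features
    # rather than raw pixels, so fine ecosystem detail is not averaged away.
    feat_loss = np.mean((x @ W_disc - x_hat @ W_disc) ** 2)

    # Adversarial term: softplus of the discriminator score penalizes
    # reconstructions the discriminator rejects as fake.
    adv_loss = np.mean(np.logaddexp(0.0, -(x_hat @ W_disc).sum(axis=1)))

    return kl, feat_loss, adv_loss

def total_loss(x, w_kl=0.1, w_feat=1.0, w_adv=0.05):
    """Weighted sum of the VAE and GAN terms (weights are assumptions)."""
    kl, feat, adv = hybrid_losses(x)
    return w_kl * kl + w_feat * feat + w_adv * adv
```

The key design point this sketch captures is that the reconstruction error is measured in the discriminator's feature space instead of pixel space, which is what lets fine detail survive training.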
Using SageMaker HyperPod to train such a complex model allows Hum.AI to efficiently process their curated SeeFar dataset through distributed training across multiple GPU-based instances. The model simultaneously optimizes both VAE and GAN objectives across GPUs. This, paired with the SageMaker HyperPod auto-resume feature that automatically resumes a training run from the latest checkpoint, provides training continuity, even through node failures.
Hum.AI also used the comprehensive out-of-the-box observability features of SageMaker HyperPod through Amazon Managed Service for Prometheus and Amazon Managed Grafana for metric tracking. For their distributed training needs, they used dashboards to monitor cluster performance, GPU metrics, network traffic, and storage operations. This extensive monitoring infrastructure enabled Hum.AI to optimize their training process and maintain high resource utilization throughout their model development.
"Our decision to use SageMaker HyperPod was simple; it was the only service out there where you can continue training through failure. We were able to train larger models faster by taking advantage of the large-scale clusters and redundancy offered by SageMaker HyperPod. We were able to execute experiments faster and iterate on models at speeds that were impossible prior to SageMaker HyperPod. SageMaker HyperPod took all the worry out of large-scale training failures. They've built the infrastructure to hot swap GPUs if anything goes wrong, and it saves thousands in lost progress between checkpoints. The SageMaker HyperPod team personally helped us set up and execute large training runs quickly and easily."
– Kelly Zheng, CEO of Hum.AI.
Hum.AI's innovative approach to model training is illustrated in the following figure. The diagram showcases how their model simultaneously optimizes both VAE and GAN objectives across multiple GPUs. This distributed training strategy is complemented by the SageMaker HyperPod auto-resume feature, which automatically restarts training runs from the latest checkpoint. Together, these capabilities provide continuous and efficient training, even in the face of potential node failures. The image provides a visual representation of this robust training process, highlighting the seamless integration between Hum.AI's model architecture and the SageMaker HyperPod infrastructure.

How to save time and money building with Amazon SageMaker HyperPod
Amazon SageMaker HyperPod removes the undifferentiated heavy lifting for climate tech startups building FMs, saving them time and money. For more information on how SageMaker HyperPod's resiliency helps save costs while training, check out Reduce ML training costs with Amazon SageMaker HyperPod.
At its core is deep infrastructure control optimized for processing complex environmental data, featuring secure access to Amazon Elastic Compute Cloud (Amazon EC2) instances and seamless integration with orchestration tools such as Slurm and Amazon EKS. This infrastructure excels at handling multimodal environmental inputs, from satellite imagery to sensor network data, through distributed training across thousands of accelerators.
The intelligent resource management available in SageMaker HyperPod is particularly valuable for climate modeling, automatically governing task priorities and resource allocation while reducing operational overhead by up to 40%. This efficiency is crucial for climate tech startups processing vast environmental datasets, because the system maintains progress through checkpointing while making sure that critical climate modeling workloads receive the necessary resources.
For climate tech innovators, the SageMaker HyperPod library of over 30 curated model training recipes accelerates development, allowing teams to begin training environmental models in minutes rather than weeks. The platform's integration with Amazon EKS provides robust fault tolerance and high availability, essential for maintaining continuous environmental monitoring and analysis.
SageMaker HyperPod flexible training plans are particularly beneficial for climate tech projects, allowing organizations to specify completion dates and resource requirements while automatically optimizing capacity for complex environmental data processing. The system's ability to suggest alternative plans provides optimal resource utilization for computationally intensive climate modeling tasks. With support for next-generation AI accelerators such as AWS Trainium chips and comprehensive monitoring tools, SageMaker HyperPod provides climate tech startups with a sustainable and efficient foundation for developing sophisticated environmental solutions. This infrastructure allows organizations to focus on their core mission of addressing climate challenges while maintaining operational efficiency and environmental responsibility.
Practices for sustainable computing
Climate tech companies are especially aware of the importance of sustainable computing practices. One key approach is the meticulous monitoring and optimization of energy consumption during computational processes. By adopting efficient training techniques, such as reducing the number of unnecessary training iterations and employing energy-efficient algorithms, startups can significantly lower their carbon footprint.
Moreover, the integration of renewable energy sources to power data centers plays a crucial role in minimizing environmental impact. AWS is determined to make the cloud the cleanest and most energy-efficient way to run all our customers' infrastructure and businesses. We've made significant progress over the years. For example, Amazon has been the largest corporate purchaser of renewable energy in the world every year since 2020. We've achieved our renewable energy goal to match all the electricity consumed across our operations, including our data centers, with 100% renewable energy, and we did this 7 years ahead of our original 2030 timeline.
Companies are also turning to carbon-aware computing principles, which involve scheduling computational tasks to coincide with periods of low carbon intensity on the grid. This practice means that the energy used for computing has a lower environmental impact. Implementing these strategies not only aligns with broader sustainability goals but also promotes cost efficiency and resource conservation. As the demand for advanced computational capabilities grows, climate tech startups are becoming vigilant in their commitment to sustainable practices so that their innovations contribute positively to both technological progress and environmental stewardship.
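The core of carbon-aware scheduling can be sketched in a few lines: given an hourly grid carbon-intensity forecast, pick the contiguous window where a batch job's average intensity is lowest. The forecast values below are made up for illustration; a real scheduler would pull them from a grid-carbon-intensity data provider:

```python
def pick_greenest_window(forecast, job_hours):
    """Given an hourly carbon-intensity forecast (gCO2/kWh), return the
    start index and average intensity of the contiguous window of
    length job_hours with the lowest average intensity."""
    if job_hours > len(forecast):
        raise ValueError("job longer than forecast horizon")
    best_start, best_avg = 0, float("inf")
    for start in range(len(forecast) - job_hours + 1):
        avg = sum(forecast[start:start + job_hours]) / job_hours
        if avg < best_avg:
            best_start, best_avg = start, avg
    return best_start, best_avg

# Hypothetical 8-hour forecast: intensity dips overnight as wind picks up.
forecast = [430, 410, 380, 220, 190, 210, 340, 400]
start, avg = pick_greenest_window(forecast, job_hours=3)
# The 3-hour job would be scheduled to start at hour index `start`.
```

A production scheduler would also weigh deadline constraints and cluster availability against the carbon signal, but the window search above is the carbon-aware piece.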
Conclusion
Amazon SageMaker HyperPod is emerging as a critical tool for climate tech startups in their quest to develop innovative solutions to pressing environmental challenges. By providing scalable, efficient, and cost-effective infrastructure for training complex multimodal and multi-model architectures, SageMaker HyperPod enables these companies to process vast amounts of environmental data and create sophisticated predictive models. From Orbital Materials' sustainable material discovery to Hum.AI's advanced earth observation capabilities, SageMaker HyperPod is powering breakthroughs that were previously out of reach. As climate change continues to pose urgent global challenges, SageMaker HyperPod's automated management of large-scale AI training clusters, coupled with its fault-tolerance and cost-optimization features, allows climate tech innovators to focus on their core mission rather than infrastructure management. By using SageMaker HyperPod, climate tech startups are not only building more efficient models, they're accelerating the development of powerful new tools in our collective effort to address the global climate crisis.
About the authors
Ilan Gleiser is a Principal GenAI Specialist at Amazon Web Services (AWS) on the WWSO Frameworks team, focusing on developing scalable artificial general intelligence architectures and optimizing foundation model training and inference. With a rich background in AI and machine learning, Ilan has published over 30 blog posts and delivered more than 100 prototypes globally over the last 5 years. Ilan holds a master's degree in mathematical economics.
Lisbeth Kaufman is the Head of Climate Tech BD, Startups and Venture Capital at Amazon Web Services (AWS). Her mission is to help the best climate tech startups succeed and reverse the global climate crisis. Her team has technical resources, go-to-market support, and connections to help climate tech startups overcome obstacles and scale. Lisbeth worked on climate policy as an energy/environment/agriculture policy advisor in the U.S. Senate. She has a BA from Yale and an MBA from NYU Stern, where she was a Dean's Scholar. Lisbeth helps climate tech founders with product, growth, fundraising, and making strategic connections to teams at AWS and Amazon.
Aman Shanbhag is an Associate Specialist Solutions Architect on the ML Frameworks team at Amazon Web Services (AWS), where he helps customers and partners with deploying ML training and inference solutions at scale. Before joining AWS, Aman graduated from Rice University with degrees in computer science, mathematics, and entrepreneurship.
Rohit Talluri is a Generative AI GTM Specialist at Amazon Web Services (AWS). He is partnering with top generative AI model builders, strategic customers, key AI/ML partners, and AWS Service Teams to enable the next generation of artificial intelligence, machine learning, and accelerated computing on AWS. He was previously an Enterprise Solutions Architect and the Global Solutions Lead for AWS Mergers & Acquisitions Advisory.
Ankit Anand is a Senior Foundation Models Go-To-Market (GTM) Specialist at AWS. He partners with top generative AI model builders, strategic customers, and AWS Service Teams to enable the next generation of AI/ML workloads on AWS. Ankit's experience includes product management expertise within the financial services industry for high-frequency/low-latency trading, and business development for Amazon Alexa.

