MongoDB Blog

Articles, announcements, news, updates and more

Best of Breed: commercetools on Building Composable Commerce on MongoDB

What's behind the power of a modern, data-centric, composable commerce platform that solves for all of the consumer demands in an increasingly AI-driven ecommerce landscape? Just ask Michael Scholz, the VP of Product and Customer Marketing at commercetools. commercetools Composable Commerce is an industry-leading commerce platform used by leading household brands like Express Inc., Danone, Ulta Beauty, Salling Group, John Lewis & Partners, and Kmart, all of which are building best-in-class omnichannel shopping experiences.

Modern, data-centric composable commerce

Mr. Scholz took to the stage at MongoDB.local NYC to talk about how MongoDB is powering commercetools' Composable Commerce platform, and how together MongoDB and commercetools are addressing the challenge of growth in the retail industry. "Global retail sales are about to grow 56% to just a little over 8 trillion. The real question here is: are the retailers ready for what's ahead of them?" he asked. It's a difficult question for many in the retail industry to answer, whether they're retailers building ecommerce stacks in house or software companies trying to build a packaged ecommerce solution. Here's a deep insight into how commercetools have succeeded and why they chose MongoDB as their trusted advisor.

commercetools started building on MongoDB from day one, as they saw the database and the fully managed Atlas service as a best-of-breed option. They're not in the business of managing data; they want to focus on value-add features for their product and company. "We don't want to be the custodians of data, we want to focus on what is important to us, which is commerce," Mr. Scholz said in New York. MongoDB Atlas allows commercetools to do that by offering a fully managed database as a service that is cloud-native SaaS, reducing the operational effort of managing thousands of databases and providing a highly available and scalable service. Elastic scale is incredibly important in retail, with peak events like Black Friday and Christmas, and also unplanned traffic surges, for example, should an influencer spark demand for a product unexpectedly. The shared ability of MongoDB Atlas and commercetools to grow or shrink automatically in response to demand is key, making the system highly performant at scale and also cost-effective during low traffic.

commercetools are considered thought leaders in the software industry for how they develop and share architectural best practices. commercetools CEO Dirk Hoerig coined the term Headless, and commercetools are co-founders of the MACH Alliance, which champions microservices, API-first, cloud-native SaaS, and headless architecture practices. MongoDB is an enabler member of the MACH Alliance; its global multi-cloud database enables the building of a MACH-compliant architecture and promotes a lean and agile development environment. "APIs can and will forever be able to be consumed by any consumer device, front-end, or other application," according to Dirk Hoerig, the CEO and co-founder of commercetools. In this setup, it's vital that the backend is fast and dependable. With MongoDB Atlas' high availability architecture, commercetools was able to offer the unbelievable SLA of 100% uptime for three years in a row in Europe! "Nobody is going to believe that, but if we're looking at that and looking at our GCP instance, we have 100% uptime for GCP and for MongoDB. In the U.S., we had 99.99%, and it's really just a rounding error," Mr. Scholz said at MongoDB.local NYC. "It's all about high performance and low latency." Dive further into the talk to learn about composable commerce, and why MongoDB is a match made in heaven for commercetools: a way to unlock more growth possibilities and deliver outstanding shopping experiences while innovating fast to be ready for what's next.

What makes commercetools & MongoDB the perfect match

MongoDB powers commercetools to deliver innovation at speed. Through this partnership and with MongoDB's robust technology, commercetools has built truly composable technology for businesses that require unlimited flexibility and infinite scale at lower costs. "We've realized the only constant is change," Mr. Scholz said. "We don't really know what's about to happen. It's all about how we can future-proof our software, and how we accomplish that with MongoDB." Mr. Scholz illustrates why MongoDB is the perfect match for modern, data-centric commerce in four key areas:

Figure 1: commercetools chose MongoDB because it helps them iterate quickly, gives them unlimited scale, can run anywhere, and helps them build better apps faster.

Embracing the future: Integrating AI into retail

Looking into the future of retail tech, the challenge of integrating AI into applications is fast approaching. Mr. Scholz highlighted how the ability to clean, migrate, and enrich data through MongoDB's flexible document model helps them build customized AI experiences for customers. These are the building blocks from which we can begin to talk about AI-powered analytics, supply chain, personalization, and more.

Figure 2: Retail reference architecture with commercetools and MongoDB

commercetools have been using machine learning for a long time; one of the key use cases is helping retailers easily create categories and product types automatically when they import product data sets into MongoDB. With GenAI top of mind, commercetools are looking at a first set of use cases, like speeding up promotion creation: leveraging models on top of their data in MongoDB to auto-generate content for brand portfolios that matches their tone and audience.

A perfect partnership

This modern, data-centric, composable commerce platform is the basis of huge success for commercetools and its customers. Through innovative architecture and quick iteration on new features, commercetools has become the leading technology in its field. Their customers' results include inspiring numbers such as +35% average order value, 2x sales order increases, +40% increases in cross-selling, and 100ms response times. For more on how MongoDB enables software companies and retailers to build architectures that align with MACH principles, see MACH Aligned for Retail (Microservices, API-First, Cloud Native SaaS, Headless) | MongoDB.

August 10, 2023
Applied

Changes to the findOneAnd* APIs in Node.js Driver 6.0.0

Do you use the MongoDB Node.js driver? If so, there's a good chance you use various find() operations regularly. MongoDB plans to release version 6.0.0 of the Node.js driver in August 2023, and we've made some exciting improvements to the findOneAnd* operations. With the new driver release, the modified (or original) document targeted by a findOneAnd* operation will now be returned by default.

Current state

Up until now, rather than returning the requested document, this family of API methods would return a ModifyResult, which contains the requested document in a value field. This design was due to these APIs leveraging the MongoDB Server's findOneAndModify command and wrapping the command's output directly. To demonstrate, let's adapt the code from the driver's documented usage examples to update one document in our movies collection using the findOneAndUpdate API.

const database = client.db("sample_mflix");
const movies = database.collection("movies");

// Query for a movie that has the title 'The Room'
const query = { title: "The Room" };

const updatedMovie = await movies.findOneAndUpdate(query, {
  $set: { "imdb.rating": 3.4, "imdb.votes": 25750 }
}, {
  projection: { _id: 0, title: 1, imdb: 1 },
  returnDocument: "after"
});
console.log(updatedMovie);

{
  lastErrorObject: { n: 1, updatedExisting: true },
  value: { title: 'The Room', imdb: { rating: 3.4, votes: 25750, id: 368226 } },
  ok: 1,
  '$clusterTime': {
    clusterTime: new Timestamp({ t: 1689343889, i: 2 }),
    signature: {
      hash: Binary.createFromBase64("3twlRKhDSGIW25WVHZl17EV2ulM=", 0),
      keyId: new Long("7192273593030410245")
    }
  },
  operationTime: new Timestamp({ t: 1689343889, i: 2 })
}

One of the options we set was a returnDocument of after, which should return the updated document. Though the expectation may be that the function call returns the document directly, as we can see, this isn't the case. While the document you're looking for can be accessed via updatedMovie.value, that isn't the most intuitive experience. But changes are on the way!

What can we do right now?

Starting with the Node.js driver 5.7.0 release, a new FindOneAnd*Options property called includeResultMetadata has been introduced. When this property is set to false (the default is true), the findOneAnd* APIs return the requested document as expected.

const updatedMovie = await movies.findOneAndUpdate(query, {
  $set: { "imdb.rating": 3.3, "imdb.votes": 25999 }
}, {
  projection: { _id: 0, title: 1, imdb: 1 },
  includeResultMetadata: false
});
console.dir(updatedMovie);

{ title: 'The Room', imdb: { rating: 3.3, votes: 25999, id: 368226 } }

What about TypeScript?

If your application uses TypeScript with the MongoDB Node.js driver, then anywhere a findOneAnd* call is made, the requested document is accessed via the value property of the ModifyResult. This occurs when includeResultMetadata is not set, or when it is set to true (the current default). Type hinting will indicate the schema associated with the collection the operation was executed against. As we would expect, when includeResultMetadata is changed to false, inline validation will flag an issue, as the value property no longer exists on the type associated with the result. Attempting to compile the TypeScript project will also fail:

TSError: ⨯ Unable to compile TypeScript:
index.ts:31:17 - error TS18047: 'updatedMovie' is possibly 'null'.

31 console.dir(updatedMovie.value);
               ~~~~~~~~~~~~

index.ts:31:30 - error TS2339: Property 'value' does not exist on type 'WithId<Movie>'.

31 console.dir(updatedMovie.value);

Next Steps

If you're using the findOneAnd* family of APIs in your JavaScript or TypeScript project, upgrading the MongoDB Node.js driver to 5.7.0+ and adding the includeResultMetadata: false option to those API calls will let you adapt your application to the new behavior ahead of the 6.0.0 release. Once 6.0.0 is released, includeResultMetadata: false becomes the default behavior. If your application relies on the previous behavior of these APIs, setting includeResultMetadata: true will allow you to continue to access the ModifyResult directly.
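As a quick sketch of that migration, reusing the movies collection and query from the examples above (the update document here is a stand-in, not code from the release):

// On 5.7.0+, opt in to the upcoming behavior explicitly
const doc = await movies.findOneAndUpdate(query, update, {
  includeResultMetadata: false // the document is returned directly
});

// On 6.0.0+, that is the default, so no option is needed
const sameDoc = await movies.findOneAndUpdate(query, update);

// On 6.0.0+, opt back in to the old shape if you still need the ModifyResult
const result = await movies.findOneAndUpdate(query, update, {
  includeResultMetadata: true
});
console.log(result.value); // the document lives under `value` again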

August 8, 2023
Updates

The MongoDB for VS Code Extension Is Now Generally Available

As of today, the MongoDB for VS Code Extension has been downloaded one million times! A huge thank you to all the developers who build with MongoDB and VS Code. We look forward to working with you to continue to improve the extension. Two million downloads, here we come!

Three years ago, we introduced the MongoDB for VS Code Extension to the world in Public Preview. VS Code is the most popular Integrated Development Environment (IDE) for developers, and we were excited to bring the power of MongoDB, one of the world's most-loved databases, to developers right in their favorite IDE. Since that time, we've seen skyrocketing growth in adoption of the extension, which now has over 800k installs and an average rating of 4.5 stars in the VS Code Extension store. The verdict is in: people love not only VS Code and MongoDB, but also a unified experience in the form of the MongoDB for VS Code Extension. Given the popularity of the tool and the innovations we've continued to make in the experience, we are delighted to announce that the MongoDB for VS Code Extension is now generally available.

Why use the extension?

This free, downloadable extension makes it easy for developers to build applications and work with application data in MongoDB directly from VS Code. Not only do you get the benefit of interacting with MongoDB data in a familiar IDE experience you've likely already customized to your preferences; you can also work with your application data and your application code all in one place. And with the extension now generally available (GA), you can have increased confidence in the extension and MongoDB's long-term commitment to ongoing improvements to the experience.

What the extension can do

With the MongoDB for VS Code Extension, you get a single unified interface (VS Code) that you already know and love. Within the extension, you can work with your application data from MongoDB side by side with your application code for a more streamlined software development experience. Let's take a look at what you can do with the extension.

Connect to MongoDB

After you've installed the extension, the first thing you'll want to do is connect to MongoDB using a connection string. If you're using MongoDB Atlas, you can find your connection string in the Atlas Web UI under the "Database" view by clicking the "Connect" button and then choosing VS Code as your connection option.

Data exploration

Within the extension, it's easy to look at your data on MongoDB while working on your code. In the left-hand sidebar, you can easily click through databases, collections, and documents, as well as see relevant schema and indexes. Referencing both schema and indexes here during development can be helpful because:
- By looking at the schema, you can see what fields you can query on and what their types are
- You can confirm if your query is covered by an index for faster reads against the database

Playgrounds

The MongoDB for VS Code Extension gives you a fully-featured JavaScript Playgrounds experience for rapid scripting and prototyping. In Playgrounds you can prototype queries, aggregations, and MongoDB commands with syntax highlighting and intelligent autocomplete. After you write your code, just hit the "play" button or use your favorite keyboard shortcut to instantly see the results of code execution. A small sketch of a Playground script follows.
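For illustration, here is a minimal Playground sketch in shell syntax; the database, collection, and field names are invented for the example:

// Switch to (or create) a database
use('sample_store');

// Insert a document into a collection
db.orders.insertOne({ item: 'notebook', qty: 3, createdAt: new Date() });

// Prototype an aggregation with the Query API
db.orders.aggregate([
  { $group: { _id: '$item', totalQty: { $sum: '$qty' } } }
]);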
Within Playgrounds you can:
- Create new databases and collections
- Execute Create-Read-Update-Delete (CRUD) operations against your MongoDB database
- Prototype queries and aggregations using MongoDB's powerful and expressive Query API
- Export the syntax for a given query or aggregation to your chosen programming language (including language driver syntax)

You can also save Playground files together with your application code and version them in git. This is a great option for documenting all the queries and aggregations your application runs, for scripts that generate or import sample datasets to seed your development clusters, or for scripts that create indexes or define schema migrations. And because Playgrounds use the shell syntax, you can then run them programmatically with the MongoDB Shell.

Access the MongoDB Shell

Sometimes you just want to run a quick query or command in your terminal rather than using a fully-featured UI. The MongoDB Shell is the perfect tool for these kinds of quick data interactions, and you can access the Shell without ever leaving VS Code. Just right-click on your cluster and select "Launch MongoDB Shell" to get started with the Shell.

Terraform

If your team uses Terraform, you'll probably be interested in the MongoDB Atlas Terraform Provider for building with MongoDB. The MongoDB for VS Code Extension gives you access to snippets of code for common tasks you might want to accomplish, including managing your Terraform configuration for Atlas. To use this feature, just open a Terraform file, type atlas, go through the predefined placeholders, and configure your credentials.

The MongoDB for VS Code Extension lets you do all of the above - and more. To learn about all the different capabilities of the extension, check out the documentation here.

New features

Here's what's new in the extension now that it's generally available:
- Autocomplete support with IntelliSense for the MongoDB Query API, making it more intuitive to type queries and aggregations for your data on MongoDB
- Improvements to the Playgrounds experience to make them more reflective of a traditional JavaScript environment, including the ability to integrate them with common tools for the JavaScript ecosystem such as ESLint and Prettier
- Time series collections can now be created right from Playgrounds
- You can create column store indexes to support your analytics queries

Get started today

If you haven't tried it yet, now is the time to start using the MongoDB for VS Code Extension! To install it, simply search for it in the Extensions list inside VS Code or download it from the VS Code Marketplace. Or, if you're a current user, be sure to check for updates so you get the latest version of the extension and access to the new features that come with it. As you build with the MongoDB for VS Code Extension, feel free to give us feedback on your product experience in the MongoDB Feedback Engine, so we can continue to take the pulse of the community and further optimize the extension for users.

August 8, 2023
Updates

Three Most Common Developer Skill Gaps Impacting Your Business

When we survey IT leaders about what's holding them back from adopting modern data platforms like MongoDB, skill gaps are a commonly cited top reason. And from our experience, most teams don't even realize that these skill gaps exist in the first place. The past few years have been a time of rapid change for developers, with teams looking to simplify their architecture and increase agility to build smarter apps that meet the needs of their users. To meet these demands and keep up with technology advancements like event-driven architectures, in-app analytics, and now AI, organizations have begun to look toward transformation and modernizing their legacy systems.

It's critical that when organizations undertake these transformations, they bring their people along with them. But according to Deloitte Insights, that doesn't always happen: only 30% of workers feel supported by their company's skill development opportunities.

Figure 1: Dissatisfaction with professional development investments

The gap between the skills developers are equipped with and the skills companies require is widening, creating risk in their businesses and leading to increased costs through performance, productivity, and security issues. To help companies close these skill gaps, MongoDB's Instructor-Led Training team created a tool called MongoDB Skill Scanner. This tool has helped companies across industries identify skill shortages and create data-driven training programs to improve MongoDB proficiency, increase productivity, and reduce risk and errors. Skill Scanner also helps save time and money spent on training by making these programs extremely efficient. Now that several hundred developers at a variety of organizations have used Skill Scanner, we have early data on three of the most common skill gaps in developers that your team should be looking out for (and providing upskill opportunities for).

Skill gap #3: Aggregation

The third most common skill gap identified through MongoDB Skill Scanner is aggregation. An aggregation pipeline consists of one or more stages that process documents:
- Each stage performs an operation on the input documents. For example, a stage can filter documents, group documents, and calculate values.
- The documents that are output from a stage are passed to the next stage.
- An aggregation pipeline can return results for groups of documents, for example, the total, average, maximum, and minimum values.

A lack of knowledge of aggregations severely limits the types of queries developers can create with MongoDB. Developers end up building more logic in the application server instead of the database server, making applications harder to maintain. So what does this mean for your business? A skill shortage in this area can cause slow applications and operations and force increases in hardware size, which also creates more expenses for your team. Aggregation pipelines, especially for non-JavaScript back ends, may seem complex at first, but with native mapping in each driver they take processing capabilities to the next level without your having to leave your usual language. The sketch below shows the shape of a simple pipeline.
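For instance, a minimal pipeline in shell syntax might look like this (the collection and field names are invented for the example):

// Filter, group, and sort inside the database instead of in application code
db.orders.aggregate([
  { $match: { status: 'shipped' } },        // stage 1: filter documents
  { $group: {                               // stage 2: group and calculate values
      _id: '$customerId',
      total: { $sum: '$amount' },
      average: { $avg: '$amount' }
  } },
  { $sort: { total: -1 } }                  // stage 3: order the grouped results
]);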
Skill gap #2: Indexes and optimization

Indexes and optimization are the second most common skill deficit we see among developers. When an application is slow, teams will jump into action trying to diagnose what the problem might be. A commonly missed step is taking a step back and re-evaluating the indexing strategy, which is a fundamental element of managing any database. MongoDB supports a wide variety of index types and language-specific sort orders to support complex access patterns. Understanding how to create the right indexes for your data structure (and when to remove unnecessary indexes) is crucial for efficient query operations. A lack of knowledge in indexing and optimization can lead to a variety of technical issues, including:
- Missing indexes, leading to slower queries
- Excess or redundant indexes, which impact write performance and disk consumption

These issues are a common culprit behind rising TCO and slower applications. The sketch below shows the basic index workflow.
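As a small illustration in shell syntax (again, the collection, field names, and values are invented):

// Create a compound index to support a common access pattern
db.orders.createIndex({ customerId: 1, createdAt: -1 });

// Confirm the query uses the index rather than scanning the whole collection
db.orders.find({ customerId: 42 })
  .sort({ createdAt: -1 })
  .explain('executionStats');

// Remove an index that no longer matches your access patterns
db.orders.dropIndex({ status: 1 });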
Skill gap #1: Security

Lastly, by far the most common developer skill gap we see from Skill Scanner results is security. This is also one of the most costly errors a team can make; compliance with security regulations avoids expensive audit findings and greatly reduces risk. Insufficient knowledge of proper security measures may lead to:
- Non-compliance with industry regulations, resulting in fines and legal penalties
- Outages or disruptions, leading to downtime
- Reputational damage
- Data loss or breach

Making sure developers know the basics of applied security best practices ensures that no one accidentally opens up security loopholes.

Identifying team skill gaps

Before jumping into upskilling the team, it's critical to understand where your team stands and which skill areas they need to improve, so that you can close the gap in a productive, data-driven way. MongoDB Skill Scanner asks a series of multiple-choice questions and then provides you with a clear understanding of each individual's level of expertise across a set of technical skills that are critical for success working with MongoDB in their role. To get started with MongoDB Skill Scanner, contact our team here to get access for yourself or your team.

Closing the skill gap

Instructor-Led Training empowers teams through highly relevant training content, delivered live by expert instructors to maximize productivity and equip developers and DBAs with the skills they need to succeed, leading to higher employee retention, a better user experience, and fewer errors and inefficiencies. Instructor-Led Training has courses for all skill levels and roles, meaning your team only learns what they need, when they need it. We have multiple training delivery methods to match your organization's needs, including Private Training (classes delivered on your schedule, onsite or remotely), Public Training (remote-only and open to the public), and Precision Learning Programs (white-glove MongoDB training initiatives with individualized learning plans). For more information on course details, learning paths, and training schedules, view our Training Catalog. If you're interested in trying out MongoDB Skill Scanner or want to explore the MongoDB Training programs further, you can reach out to your account representative or contact us directly.

August 7, 2023
Applied

Announcing the Realm C++ Preview

Mobile edge computing, the ability to deploy compute and storage closer to the end user, has introduced a new cloud computing paradigm that requires many organizations to build distributed applications that are real-time, performant, and highly engaging while working with data closer to the end user. MongoDB Atlas for the Edge offers capabilities to build, manage, and deploy distributed applications that securely use data at the edge with high availability, resilience, and reliability. As we continue on this journey toward advancing our edge client capabilities, we are excited to announce the release of the Realm C++ SDK.

The Realm C++ SDK provides developers in industries that employ connected devices (e.g., IIoT, automotive, healthcare, retail, energy) with a comprehensive solution. Developers who use the Atlas Device Sync Edge Server will have access to network handling and conflict resolution, an essential component of managing the intermittent connectivity that is common in these spaces. Real-time use cases are also enabled thanks to the ability to store and sync data with the cloud, all while leveraging the lightweight nature of C++. As an object-oriented database that does not require a separate mapping layer or ORM (object-relational mapper), Realm is a simpler, more intuitive alternative to SQLite. This is all part of MongoDB's mission to be a data platform that provides developers with technologies that make the development process seamless.

A better way to declare object models

The Realm C++ SDK Preview release introduces an improved syntax for defining object models, offering an experience similar to interfacing with POCOs. These improvements also allow for automatic schema discovery and support for Windows operating systems. For the full list of what has been updated, check out the release notes.

Grabbing a coffee with Realm and Qt at the edge

During .local NYC '23, we demoed the process of connecting a smart coffee machine application using the Realm C++ SDK with the newly launched Private Preview release of the Atlas Device Sync Edge Server. This builds on the work we have been doing with Qt and showcases the capabilities of our latest update. Check out our GitHub repository for the coffee machine application source code, and see below for a recording of the demo, as well as the architecture of the application's functionality.

Next steps

As we continue to build on this release, we welcome and value all feedback. If you have any comments or suggestions, please share them via our GitHub project. Ready to get started? Head over to our Docs page for instructions on installing the C++ SDK, and register for Atlas so that you can connect to Device Sync.

August 3, 2023
Updates

Exploring Chart Types in MongoDB Atlas Charts

As you begin your chart-building journey, you'll find there are many ways to visualize your data in Atlas Charts. Specific data visualization needs vary by team, and we have a growing collection of chart types with various, specific purposes to help you discover insights and communicate effectively. Charts are an essential storytelling piece when working with large amounts of data: visualizations condense vast data into a coherent format that makes information consumable to a wide range of data consumers. When analyzing your data, it's important to recognize that different chart types serve distinct purposes. That is why it's important to choose the right chart type for each potential insight, so that when you put it all together, you have a diverse and all-encompassing dashboard.

How to effectively use Charts

Charts was designed with a simple user interface that makes it quick for you to build charts and visualize your data. To get the most out of it, though, this guide on chart types can help you build charts more quickly and efficiently. Our chart types are split into the following series:
- Column and bar charts
- Line and area charts
- Combo charts
- Grid charts
- Circular charts
- Text charts
- Geospatial charts

Determining the best chart type can be an overwhelming task when there are so many to pick from, but knowing the specific strengths of each chart type can help you select the right chart for your use case.

Most common chart types in Atlas Charts

1. Data tables

What is a data table?
Data tables are used to organize data in a tabular view, ultimately allowing viewers to quickly read the results of detailed data.

What is an example use case for a data table?
A data table can be used in healthcare system applications, which store patient information and records, medical history, and treatment plans, enabling healthcare professionals to access patient data more easily and effectively.

2. Number charts

What is a number chart?
Number charts display a single aggregated value from a data field, often representing a grand total or the overall state of the data.

What is an example use case for number charts?
A number chart can be used for social media analytics, where engagement metrics, subscriber count, and post performance are summarized for users to track account growth.

3. Grouped column and bar charts

What is a grouped column and bar chart?
Grouped column and bar charts are used to show detailed data distribution across multiple categories instead of a single category.

What is an example use case for grouped column and bar charts?
To analyze financial performance, a grouped column and bar chart is useful for viewing the revenue, expenses, and profits of multiple business units over a period of time.

4. Donut charts

What is a donut chart?
Donut charts display the proportional distribution of a dataset, often used to showcase the general trends of data instead of exact data values.

What is an example use case for donut charts?
To track website traffic or customer churn rates, a donut chart is useful for visualizing the proportion of website visitors coming from various sources and the percentage of those visitors who have churned or stayed with the company over a period of time.

These are a few of the most commonly used chart types in Charts. Now let's walk through some less common chart types to enrich your data visualization toolkit.

Chart types you might not have used in Charts before

1. Line and area charts

What is a line and area chart?
Line and area charts display a series of data points connected by straight line segments; for area charts specifically, the space beneath the segments is filled with color. Both of these chart types are used to track trends over time, such as sales, stock prices, or website traffic.

What is an example use case for line and area charts?
A line and area chart can be used in e-commerce applications to show sales performance, revenue growth, and profitability trends over specific time intervals.

2. Stacked column charts

What is a stacked column chart?
Stacked column charts are used to show the composition and comparison of multiple variables over a period of time. They look like a series of columns stacked on top of one another and are most useful for analyzing changes across several categories.

What is an example use case for stacked column charts?
A stacked column chart can be used for product comparison, where the features, prices, and user ratings of various products or services are compared side by side.

3. Geospatial charts

What is a geospatial chart?
Geospatial charts are map-based charts created from geospatial data and other forms of data to define specific geographical locations, in the form of latitude and longitude coordinates or text fields with country and state names. Atlas Charts allows users to visualize geospatial data in three different chart formats: choropleth, scatter, and heatmap.

What is an example use case for geospatial charts?
A geospatial chart can be used for environmental monitoring, where soil and air quality data, pollution levels, deforestation rates, and other environmental factors are analyzed to locate areas for conservation.

4. Heatmaps

What is a heatmap?
Heatmaps are used to show relationships between two variables, showcased in a tabular format as a range of colors. Darker, more intense shades represent larger aggregated values, while lighter shades represent smaller aggregated values across the dataset.

What is an example use case for heatmap charts?
A heatmap chart can be used for user behavior analytics, where user interactions, clicks, and total engagement across different web pages are tracked and monitored to improve the customer experience.

Now that you have an idea of the many chart types, common and uncommon, available to you in Atlas Charts, it's time to give them a try! Use your own data, or some of MongoDB's sample datasets, to practice what you've learned and implement your next charting option. Log in to Atlas Charts today to create your visualizations! New to Atlas Charts? Get started today by logging into or signing up for MongoDB Atlas.

August 2, 2023
Applied

Introducing the Aggregation Stage Wizard in MongoDB Compass

Have you ever wanted to create an aggregation but not known where to start? We've got a solution for you! MongoDB just released a new feature to help, as part of our powerful aggregation experience in Compass. Starting in version 1.38, Compass' new Aggregation Stage Wizard will help you jumpstart aggregation development by letting you craft aggregation stages based on your use case.

Getting started with an aggregation stage is often the hardest part. Although Compass provides a variety of editors for writing aggregations, you previously needed prior experience with MongoDB Query API syntax to get started; otherwise, you had to rely on documentation or code examples to guide you through the process. The Aggregation Stage Wizard addresses this common challenge. Once you know what you'd like to accomplish with your aggregation stage, you can click the wizard icon and drag the corresponding use case to your pipeline. The Aggregation Stage Wizard then converts your use case into an aggregation stage. From there, you can enter the appropriate fields, values, and operators through a series of dropdowns and text boxes, with no need to agonize over quotations and curly braces. The Aggregation Stage Wizard will convert your entries into a valid aggregation stage written in MongoDB's Query API syntax, and you are then free to expand on your aggregation stage from the foundation the wizard sets. In using the Aggregation Stage Wizard, you'll naturally and interactively learn how to develop aggregation stages, so that pretty soon working directly in Query API syntax will be second nature for you.

To use the Aggregation Stage Wizard, please be sure to download the latest version of Compass. We also value your continued feedback. If you have any feedback about the Aggregation Stage Wizard, new use cases you'd like to see supported, or ideas for improving Compass more generally, please submit your feedback. We're continually improving Compass. Keep watching our blog for the latest updates!

August 2, 2023
Updates

Real-Time Inventory Tracking with Computer Vision & MongoDB Atlas

In today's rapidly evolving manufacturing landscape, digital twins of factory processes have emerged as a game-changing technology. But why are they so important? Digital twins serve as virtual replicas of physical manufacturing processes, allowing organizations to simulate and analyze their operations in a virtual environment. By incorporating artificial intelligence and machine learning, organizations can interpret and classify objects, leading to cost reductions, faster throughput, and improved quality levels. Real-time data, especially inventory information, plays a crucial role in these virtual factories, providing up-to-the-minute insights for accurate simulations and dynamic adjustments.

In the first blog, we covered a five-step, high-level plan for creating a virtual factory. In this blog, we delve into the technical aspects of implementing a real-time computer vision inventory inference solution, as seen in Figure 1 below. Our focus will be on connecting a physical factory with its digital twin using MongoDB Atlas, which facilitates real-time interaction between the physical and digital realms. Let's get started!

Figure 1: High-level overview

Part 1: The physical factory sends data to MongoDB Atlas

Let's start with the first task of transmitting data from the physical factory to MongoDB Atlas. Here, we focus on sending captured images of raw material inventory from the factory to MongoDB for storage and further processing, as seen in Figure 2. Using the MQTT protocol, we send images as base64-encoded strings. AWS IoT Core serves as our MQTT broker, ensuring secure and reliable image transfer from the factory to MongoDB Atlas.

Figure 2: Sending images to MongoDB Atlas via AWS IoT Core

For simplicity, in this demo we directly store the base64-encoded image strings in MongoDB documents, because each image received from the physical factory is small enough to fit into one document. However, this is not the only way to work with images (or large files generally) in MongoDB. Within our developer data platform, we have various storage methods, including GridFS for larger files, or binary data for smaller ones (less than 16MB). Moreover, object storage services like AWS S3 or Google Cloud Storage, coupled with MongoDB data federation, are commonly used in production scenarios. In such real-world scenarios, integrating object storage services with MongoDB provides a scalable and cost-efficient architecture: MongoDB is excellent for fast and scalable reads and writes of operational data, but when retrieving images with very low latency is not a priority, storing these large files in 'buckets' helps reduce costs while keeping all the benefits of working with MongoDB Atlas. Robert Bosch GmbH, for instance, uses this architecture for Bosch's IoT Data Storage, which helps service millions of devices worldwide efficiently.

Coming back to our use case: to facilitate communication between AWS IoT Core and MongoDB, we employ rules defined in AWS IoT Core, which let us send data to an HTTPS endpoint. This endpoint is configured directly in MongoDB Atlas and allows us to receive and process incoming data. If you want to learn more about MongoDB Data APIs, check this blog from our Developer Center colleagues. A rough sketch of the publishing side follows.
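To make Part 1 concrete, here is a minimal, hypothetical sketch of the publishing side using the mqtt Node.js package; the endpoint, certificate paths, topic name, and field names are placeholders rather than the demo's actual configuration:

const fs = require('fs');
const mqtt = require('mqtt');

// Connect to AWS IoT Core over TLS (endpoint and certificates are placeholders)
const client = mqtt.connect('mqtts://your-endpoint.iot.eu-west-1.amazonaws.com', {
  key: fs.readFileSync('private.key'),
  cert: fs.readFileSync('certificate.pem'),
  ca: fs.readFileSync('rootCA.pem')
});

client.on('connect', () => {
  // Encode the captured inventory image as a base64 string
  const payload = JSON.stringify({
    stationId: 'warehouse-cam-1', // illustrative identifier
    capturedAt: new Date().toISOString(),
    image: fs.readFileSync('inventory.jpg').toString('base64')
  });

  // An AWS IoT Core rule forwards messages on this topic to the Atlas HTTPS endpoint
  client.publish('factory/inventory/images', payload);
});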
Part 2: MongoDB Atlas to AWS SageMaker for CV prediction

Now it's time for the inference part! We trained a built-in multi-label classification model provided by SageMaker, using images like the one in Figure 3. The images were annotated using an .lst file format, so that for an image in which only the red and white pieces are present, but no blue piece is in the warehouse, the annotation flags red and white as present and blue as absent.

Figure 3: Sample image used for the computer vision model

The model was built using 24 training images and 8 validation images, a decision made for simplicity, to demonstrate the capabilities of the implementation rather than to build a powerful model. Despite the extremely small training/validation sample, we managed to achieve a validation accuracy of 0.97. If you want to learn more about how the model was built, check out the GitHub repo.

With a model trained and ready to predict, we created a model endpoint in SageMaker to which we send new images through a POST request, and it answers back with the predicted values. We use an Atlas Function to drive this functionality. Every minute, it grabs the latest image stored in MongoDB and sends it to the SageMaker endpoint, then waits for the response. When the response is received, we get an array of three decimal values between 0 and 1 representing the likelihood of each piece (blue, red, white) being in stock. We interpret the numeric values with a simple rule: if the value is above 0.85, we consider the piece to be in stock. Finally, the same Atlas Function writes the results to a collection (Figure 4) that keeps the current state of the physical factory's inventory. More details about the function here, and a sketch of its shape follows Figure 4.

Figure 4: Collection storing the real-time stock status of the factory
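The following is a hedged sketch of what such a scheduled Atlas Function could look like; it is not the demo's exact code, and the database, collection, and endpoint names are placeholders:

exports = async function() {
  const db = context.services.get('mongodb-atlas').db('factory'); // placeholder names
  const images = db.collection('images');
  const stock = db.collection('stock_status');

  // Grab the most recent image document
  const [latest] = await images.find().sort({ capturedAt: -1 }).limit(1).toArray();

  // POST the image to the SageMaker model endpoint (URL and auth are placeholders)
  const response = await context.http.post({
    url: 'https://runtime.sagemaker.<region>.amazonaws.com/endpoints/<model>/invocations',
    headers: { 'Content-Type': ['application/json'] },
    body: JSON.stringify({ image: latest.image })
  });

  // The model answers with three probabilities: [blue, red, white]
  const [blue, red, white] = JSON.parse(response.body.text());

  // Apply the 0.85 rule from the article and persist the current stock state
  await stock.updateOne(
    { _id: 'current' },
    { $set: { blue: blue > 0.85, red: red > 0.85, white: white > 0.85, updatedAt: new Date() } },
    { upsert: true }
  );
};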
The beauty comes when we have MongoDB Realm incorporated in the virtual factory, as seen in Figure 5. It's automatically and seamlessly synced with MongoDB Atlas through Device Sync. The moment we update the collection with the inventory status of the physical factory in MongoDB Atlas, the virtual factory, with Realm, is automatically updated. The advantage here, besides not needing any additional lines of code for the data transfer, is that conflict resolution is handled out of the box; when the connection is lost, the data isn't lost, and is instead updated as soon as the connection is re-established. This essentially enables a real-time synchronized digital twin without the hassle of managing data pipelines, configuring your code for edge cases, and losing time on non-competitive work.

Figure 5: Connecting Atlas and Realm via Device Sync

Just as an example of how companies are implementing Realm and Device Sync for mission-critical applications: the airline Cathay Pacific revolutionized how pilots logged critical flight data such as wind speed, elevation, and oil pressure. Historically, it was done manually via pen and paper until they switched to a fully digital, tablet-based app with MongoDB, Realm, and Device Sync. With this, they eliminated all paper from flights and completed one of the first zero-paper flights in the world in 2019. Check out the full article here.

As you can see, the combination of these technologies is what enables the development of truly connected, highly performant digital twins within just one platform.

Part 3: CV results are sent to the digital twin via Device Sync

In the process of sending data to the digital twin through Device Sync, developers can follow a straightforward procedure. First, we navigate to Atlas and access the Realm SDK section. Here, we can choose our preferred programming language, and the data models will be automatically pre-built based on the schemas defined in the MongoDB collections. MongoDB Atlas simplifies this task by offering copy-paste functionality, as seen in Figure 6, eliminating the need to construct data models from scratch. For this specific project, the C# SDK was utilized. However, developers have the flexibility to select from various SDK options, including Kotlin, C++, Flutter, and more, depending on their preferences and project requirements. Once the data models are in place, simply activating Device Sync completes the setup. This enables seamless bidirectional communication: developers can now send data to their digital twin effortlessly.

Figure 6: Realm C# SDK object model example

One of the key advantages of using Device Sync is its built-in conflict resolution capability. Whether facing offline interruptions or conflicting changes, MongoDB Atlas automatically manages conflict resolution. The "always on" behavior is particularly crucial for digital twins, ensuring constant synchronization between the device and MongoDB Atlas. This powerful feature saves developers significant time that would otherwise be spent building custom conflict resolution mechanisms, error-handling functions, and connection-handling methods. With Device Sync handling conflict resolution out of the box, developers can focus on building and improving their applications, confident in the seamless synchronization of data between the digital twin and MongoDB Atlas.

Part 4: Virtual factory sends inventory status to the user

For this demonstration, we built the digital twin of our physical factory using Unity so that it can be interacted with through a VR headset. With this, the user can order a piece in the physical world by interacting with the virtual twin, even if the user is thousands of miles away from the real factory. In order to control the physical factory through the headset, it's crucial that the app informs the user whether or not a piece is present in stock, and this is where Realm and Device Sync come into play.

Figure 7: The user is informed in real time of which pieces are not in stock.

In Figure 7, the user intended to order a blue piece on the digital twin, and the app is informing them that the piece is not in stock, so the order is not activated on either the physical factory or its digital twin. Behind the scenes, the app reads the Realm object that stores the stock status of the physical factory and decides whether the piece is orderable. Remember that this Realm object is in real-time sync with MongoDB Atlas, which in turn is constantly updating the stock status in the collection in Figure 4 based on SageMaker inferences.

Conclusion

In this blog, we presented a four-part process demonstrating the integration of a virtual factory and computer vision with MongoDB Atlas. This solution enables transformative real-time inventory management for manufacturing companies. If you're interested in learning more and getting hands-on experience, feel free to explore our accompanying GitHub repository for further details and practical implementation.

August 1, 2023
Applied

The Great Data Divide: Here's What's Hindering Your AI Goals

Organizational data is arguably the lifeblood of most digital-era companies. And yet, despite its significance and importance to the organization as a whole, the creation and subsequent management of data in most organizations are bifurcated - split between whether the data is transactional or analytical (operational vs. after-the-fact and historical). Between these two worlds, a great divide exists. Like the equator, which divides our planet into northern and southern hemispheres, many of our organizations operate with separate transactional and analytics hemispheres. Rooted in hardware and software limitations, transactional and analytics data processing workloads are run against different systems and hardware, operated by separate teams as well. While this has been an effective strategy for managing organizational data assets for a very long time, advances in hardware and software, and the availability of cloud infrastructure, have changed that. When it comes to an organization's ability to deploy AI at scale, we now need to change this approach to processing and managing data if we want to deliberately increase the organization's overall data processing proficiency. We're here to suggest a different operating model for consideration, one based on the collective experience of working with over 40,000 data-processing customers, many of whom are leading the way in reorganizing themselves for high data proficiency to support their AI ambitions and programs.

Treating data as a product

Let's erase the line between transactional and analytics for a moment and instead view the overall flow and use of information within an organization. It's created, it's updated, and it's read by employees, customers, data and analytics workers, and executives. Sometimes it finds itself inside an application; sometimes it manifests itself on a month-end report. Sometimes it's used to train and retrain machine learning models. It's this last scenario that's starting to reveal significant deficiencies in the traditional methods of managing data found within many organizations.

Thanks to things like mobile, cloud, and IoT, data is moving at a breakneck pace. Forty years ago, we primarily transacted in a business application and then shuttled the deltas overnight into a data warehouse. Why? Because it was simply not possible to execute analytics queries against a running transactional system. At best, the queries would time out; at worst, you would slow or halt business transactional processing and bring the business to a standstill. In addition, all analytics were after the fact. We didn't need to execute analytics queries against transactional systems; a single enterprise data warehouse repository was good enough to satisfy the reporting demands the organization placed on the data.

Today, however, our historical data assets are becoming ever more significant, sometimes even within real-time transactional business processing. Insights gleaned from historical data can be fed into decision-making transactional systems to drive better or more efficient outcomes. Think of automated decisions and inferences. Machine learning models are now supplementing some of the data analysis and decision-making that humans have traditionally had to perform. As the benefits of these models become more commonplace within transactional business systems, it's important that they make accurate decisions, especially in heavily regulated industries such as insurance and financial services. A machine learning model, as such, may need to be retrained often, and many models now demand access to data that is real-time, or as near real-time as possible. It's this hunger for data that is causing AI models to cross over the great data equator. Not satisfied with historical data, these models increasingly demand to be trained and retrained on data that is as fresh from having been created or updated as possible.

When we treat our data as a product, we see it as a thing, a business entity, or a noun: a customer, a policy, a claim, and so on. However, it also has characteristics like state, age, and context. Is it in motion, or is it at rest? Has it just been created? Is it in the process of being updated, or is it years old, sitting in a warehouse? For which business context is it being leveraged - a customer browsing products, or a data scientist looking for trends in past sales? Across all of these characteristics and contexts, the data itself isn't any more or less important. It's simply important because it's the data.

Worlds apart

When we task entirely separate teams to manage it, however - transactional vs. analytics - we lose this holistic data-as-a-product perspective. Instead, we put on very different lenses, whether we're looking at a software delivery team or a data engineering team supporting data scientists. The meaning of data after it's transacted, for example, may change once it's landed in the enterprise data warehouse or data lake. Transformations and manipulations are applied to it as it crosses over the great data equator, sometimes creating very different instances of the data. The journey often alters it from its original ground-truth state, somewhere between being copied from a transactional database and loaded into an analytics one. After that data lands in analytics databases and platforms, it's often further transformed and copied into even more subsequent databases and platforms.

For the past decade, most AI efforts have been executed within the analytics hemisphere. Historical data assets in our data warehouses and data lakes have been sufficient to serve experiments and even production AI use cases. The more AI becomes commonplace, however, the more we can expect AI models to want both historical and real-time data. As such, we should be re-aligning our bifurcated transactional and analytics organizations to help them operate as efficiently as possible, serving the right state of the data to the right consumer, for the right context.

Uniting with Domain Driven Design

Some of the best things that have come from software delivery organizations embracing Domain Driven Design stem from aligning developers, architects, business SMEs, and scrum masters into the same team, or team of teams: a bounded context in which all the folks who care about, interact with, or manipulate the software and the data can work together without having to cross departmental boundaries or bureaucracy that can cause friction when trying to deliver working software. If we consider the goal of being highly proficient and effective with data, especially complicated data (data whose state and context change quickly), it stands to reason that an Agile team of teams, or bounded context, should include not only the business SMEs, the software developers, and the architects and site reliability engineers (SREs) who maintain applications, but also the data engineers and data scientists who currently manage after-the-fact data assets and are using them to bring AI models to life.

If we truly want to embrace and treat data as a product, however, we need to eradicate the notion that data should be managed in two different hemispheres across the organization - transactional and analytics. The data will change state often, and will only continue to do so for the foreseeable future. Engineering the organization for success - efficiency and accuracy when it comes to data processing - requires deliberateness. For that, we have to actively seek out and make our goals happen. Those goals should be focused on removing known friction points: the junctions at which the exchange or processing of information is inefficient, struggling to scale, costing too much effort and money, or all of the above.

All hands on deck

When it comes to building sophisticated digital applications, managing data (in whatever state or context), building and maintaining AI models, and incorporating those models into actual business workflows and applications, it truly takes a village. As AI begins to accelerate the ability to write and deploy code, for example, the pace of application feature delivery in most organizations will increase. In short, we're going to be expected to do more in less time, thanks to the forthcoming generation of AI-enabling assistants. This will place even greater demands and expectations on the organization's technology and data workers, and especially the data infrastructure. Similarly, as AI models consume either real-time or historical data, our ability to accurately, efficiently, and quickly process and manage all of this data will need to increase significantly.

The way forward

Aligning people and resources to common goals is an effective way to transform an organization. Setting goals like treating data as a product, and embracing principles of Domain Driven Design in an organization's data-engineering practices, can help tremendously in moving toward more accurate, efficient, and performant data processing. In the organizations we work with, large and small, this transformation is beginning, and it's erasing the hard line that has existed between the two distinct data hemispheres of the organization. As AI becomes more significant, so do your developers, data scientists, and data engineers. We need them working together as efficiently and effectively as possible to meet our organization's aspirations. A way to achieve this is to reduce the friction of working with data - for developers, data scientists, and AI models alike. We invite you to have a conversation with us about your goals. We'd love to help you increase your organization's overall data processing power and unleash the power that truly comes from software and data.

August 1, 2023
Applied

Serving as the Digital Bridge: Meet the APIx Team at MongoDB

Meet MongoDB’s API Experience (APIx) team, the innovative group that connects our customers with our products. Their work is no small task; they operate at the intersection of technology and customer experience, ensuring that the product and user experience remains integrated, efficient, and effective. Keep reading to learn how APIx is making an impact and what it means to be part of this growing team. Jackie Denner: Thank you for joining me today to share insights into our APIx team's work. To start, will you give an overview of your software engineering background and how you started working with MongoDB? Colm Quinn: I come from a start-up background. My experience includes industrial automation, particularly in the development of time-series databases and real-time analytics tools for production data. My work spans various industries such as pharmaceuticals, oil and gas, renewable energy, and manufacturing. Throughout my career, I've adopted various roles, from development to customer relations, often serving as a bridge between Product and Engineering teams. I sought a new challenge and the opportunity to enhance my skills in system scaling in larger production environments, leading me to join MongoDB. Now, I serve as the Director of Engineering for the APIx team. Tasos Piotopoulos: In my nearly two-decade journey in the tech industry, I've explored a wide array of domains including gaming, consulting, healthcare, logistics, and site reliability. MongoDB invited me to join as a Lead Engineer for one of the APIx teams, an exciting role that combines management with hands-on technical work. This opportunity allowed me to utilize my expertise in large-scale distributed systems while nurturing my passion for fostering professional growth in others. The MongoDB interview process impressed me because I actually got to meet the team members I’d be working with, and everyone was friendly, knowledgeable, and great to collaborate with. Bianca Lisle: My experience as a software engineer has been diverse and exciting, and includes experience in IoT, automotive networks, and Android development. Additionally, I’ve worked extensively with the Control Plane of Redis and in-memory databases cloud services. MongoDB’s recruitment process and culture were delightful and positively influenced my decision to join. Currently, I work as a software engineer on the APIx team. JD: Thanks for the overviews! Tell me more about the APIx team. What types of projects does the team work on? CQ: In the APIx team, we strive to build a reliable and predictable API platform that caters to both external customers and our internal teams. A key part of our role is meeting customers where they are, considering their DevOps world, and integrating access into their platforms. We work closely together on different aspects of the API, which allows for comprehensive internal testing before the updates reach the customers. TP: The APIx team consists of three distinct units, one of which is the newly formed API Integrations. Collectively, we’re responsible for providing a world-class experience for users who interact programmatically and automate against Atlas, our cloud-native data platform. Two of our APIx units shoulder an array of Atlas API-related responsibilities. 
These include the auto-generation of technical specifications and software development kits (SDKs), managing API versioning to shield customer applications from disruption due to platform updates, building a comprehensive command-line interface that enables customers to work with Atlas from their terminals, and more. Operating on a more overarching scale, the APIx Integrations unit designs a range of products that elegantly integrate with Atlas APIs, facilitating customer automation against Atlas's functionality using leading infrastructure-as-code solutions.

BL: The APIx team is in a unique position as an interface between the Atlas product and the customers. We work to protect customers from breaking changes in the API and also to help our developers avoid introducing breaking changes. Recently, we worked on a project related to versioning which allows the introduction of new features without impacting the customer experience.

JD: APIx Integrations is a new team at MongoDB. What does the product direction look like?

CQ: Our initial challenge was to ensure a consistent journey across all our integrations. That includes ensuring that different tools like AWS CloudFormation and HashiCorp Terraform work in a consistent manner, are idiomatic, and follow similar documentation styles. Going forward, we aim to understand the DevOps ecosystem trends and the tools our customers want to use. We want to enhance our product offerings by going deeper, offering specific features for each platform that address common pain points. We're also seeking to broaden our scope by supporting more integrations based on market needs while maintaining consistency and ease of maintenance. Finally, we aim to improve our platform by automating and building tooling to keep pace with market changes. If you want to build systems, come do it with us!

JD: What is the engineering team hoping to achieve with APIx Integrations?

CQ: Our main goal is to increase the quality of existing and new integrations. We focus a significant part of our automation effort on maintaining consistent quality and preventing regressions in the system. We're also focused on user acquisition and gaining insights into how customers use the integrations, which can help us design better integrations in the future. We're dedicated to empathizing with our users, understanding their pain points, and working toward alleviating them. This work involves scaling up and improving our automation. It's also a great opportunity for our team members to develop their skills and grow, which aligns with our team culture.

JD: Tell me about the APIx team culture.

BL: I've found that we have a culture that strongly encourages questioning and learning. We've established a safe environment where everyone feels comfortable asking questions, regardless of their complexity or nature. We communicate publicly, and this openness allows the whole team to benefit from the shared information. It's an incredibly supportive group - we're never blocked for long due to obstacles, as there's always someone ready to help. The team's culture also gives everyone a voice. Even as a new hire, I felt encouraged to propose changes. We're open to experimentation and willing to adjust our processes based on what works best for the team. We also get the opportunity to share our work and ideas with the community, like collaborating on this blog post or participating in podcast discussions. It's an incredibly open, supportive, and dynamic team to be a part of.
CQ: Our team culture is extremely collaborative. We work closely with each other, and our relationships with product managers foster a lot of ideation and discussion. Our approach to work involves rapid ideation, swift documentation, and making sure we're all on the same page before proceeding with development. We are a remote-friendly team, prioritizing support for people wherever they work. Quality is a critical aspect of our work; we prefer to delay a feature to meet our quality bar, and this results in high-quality work that team members appreciate and take pride in. We also strive to enjoy what we do and the environment we work in. This is a reflection of MongoDB's overall culture, which is open, inviting, and encourages everyone to be themselves at work. We respect diversity and different viewpoints, as these contribute to better feedback and conversations.

JD: Tell me more about your experience with the overall engineering culture at MongoDB. What has your experience working with the greater engineering team been like?

TP: MongoDB's engineering culture embodies a profound commitment to technology as the driving force behind our work. It is refreshing to witness the genuine understanding and appreciation of technology at all levels of leadership. Also, working alongside exceptionally talented individuals at MongoDB has been a constant inspiration and motivation. The people at MongoDB are truly outstanding, making collaboration an absolute pleasure.

BL: The engineering culture at MongoDB is transparent. Regular all-hands meetings with the company's leadership, including the CEO, keep everyone updated about the company's plans and direction. I also like the technical competence that runs across all leadership levels. This technical grounding allows for realistic expectations and strategic trade-offs, protecting our high-quality output.

CQ: The engineering culture at MongoDB features technical acuity across all levels, including senior management. The depth of technical discussions, whether they involve engineers, product managers, or even salespeople, is something that pleasantly surprised me when I joined. With our primary audience being developers, we need a team with a strong set of technical skills. The work culture is incredibly friendly and supportive.

TP: Additionally, our strong product management organization significantly enriches our engineering output. MongoDB's product managers are excellent at listening to customer needs, conducting market research, and holding user interviews before and after we develop a product. This provides us with invaluable insights to gauge interest, understand user needs, produce highly impactful features, and continue refining our products post-development. The constant high-quality collaboration between these two areas has been a real growth opportunity.

JD: What learning and growth opportunities are there for someone who joins the APIx team?

CQ: Our team is constantly growing, and with this growth come many chances to explore new areas and hone our skills. We make a targeted effort within the team to dedicate time to areas we're interested in, and we even have policies like no-meeting Wednesdays to make room for learning and growth. We also engage with the open-source community, with over half of our contributions being open source. This allows us to integrate with a wider community, share ideas, and even speak at conferences.

TP: We place a significant emphasis on growth as a central aspect of our engineering experience.
We aim to provide a workspace where engineers have ample opportunities to think, read, experiment, and learn. We offer systematic coaching, weekly learning opportunities, discussions, and personal development plans. Our leads encourage each engineer to spend time on self-learning and development as part of their work. It's not just about delivering work but also about creating a nurturing environment where engineers can continuously grow with explicit support and guidance from their leads.

BL: If you're new to the APIx team, we want you to feel comfortable being yourself. Don't hesitate to ask questions, no matter how trivial they might seem. Your unique perspective could lead to improvements in our team. We encourage open communication, expressing your thoughts, and being proactive in learning about our challenges. By collaborating to solve our problems, we can elevate our team to the next level.

JD: What advice would you give to someone considering applying to an open position on the APIx team?

CQ: Initially, I'm interested in understanding how you've made an impact in your previous roles. When you're in the interview process, remember that everyone in the room wants you to succeed. We're looking for alignment in terms of how you approach situations and whether you would be happy on our team. The best way for you to succeed is to find a role where you'll genuinely enjoy the work. We're hoping you find that place on our team!

TP: MongoDB operates at an immense scale, a characteristic that might initially appear daunting, especially if you haven't worked on systems of such magnitude. Don't let the scale discourage you from applying. We provide comprehensive onboarding training, ensuring you become familiar with our practices and establish effective collaboration with colleagues. It's an incredible learning opportunity that allows you to grow both personally and professionally, and to make an impact.

Learn more about what our APIx team is working on: Atlas Administration API, Partner Integrations, Atlas CLI, and Atlas and AWS CloudFormation. Interested in transforming your career at MongoDB? Find open roles on our engineering team.
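To make the versioning approach the team describes more concrete, here is a minimal sketch of a client pinning itself to one version of the Atlas Administration API via a date-based media type in the Accept header. The project ID and API keys are placeholders, and the specific version date is an assumption; you would substitute a version from the API's documentation.

```python
import requests
from requests.auth import HTTPDigestAuth

BASE = "https://cloud.mongodb.com/api/atlas/v2"
GROUP_ID = "<your-project-id>"                          # placeholder
AUTH = HTTPDigestAuth("<public-key>", "<private-key>")  # placeholder API keys

resp = requests.get(
    f"{BASE}/groups/{GROUP_ID}/clusters",
    auth=AUTH,
    # The date-based media type pins this client to one API version,
    # so the response shape stays stable even as newer versions ship.
    headers={"Accept": "application/vnd.atlas.2023-02-01+json"},
)
resp.raise_for_status()
for cluster in resp.json().get("results", []):
    print(cluster["name"])
```

Because the version lives in the request rather than in the platform, new features can roll out behind newer version dates without disturbing clients pinned to older ones.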

July 27, 2023
Culture

AWS Names MongoDB as Taiwan ISV Partner of the Year

On July 27, during the 2023 AWS Partner Summit in Taiwan, MongoDB was recognized as the ISV Partner of the Year for Taiwan. The Amazon Web Services (AWS) Partner Awards recognize partners based on merit, highlighting their dedication to helping customers drive innovation and build solutions, and celebrating outstanding achievements by AWS partners.

MongoDB won Taiwan ISV Partner of the Year for its remarkable success in leveraging AWS services and partner programs to grow and expand its software offerings. With increasing numbers of AWS customers across verticals and segments deploying mission-critical workloads on Atlas, MongoDB's developer data platform running on AWS, MongoDB demonstrated exceptional achievements that outperformed other partners. Following an AWS Marketplace-first approach in Taiwan, MongoDB concluded more transactions via AWS Marketplace than any other partner and is also aligning its channel strategy with AWS to scale through the AWS Marketplace channel programs.

“AWS is committed to building a dynamic and rapidly expanding partner network, empowering partners to innovate and grow sustainably through leading cloud technology services and global resources. Over the past year, we have been delighted to see MongoDB helping customers in various industries overcome their challenges and create business value. We look forward to closely collaborating with our partners as we continue to support customer innovation in 2023 and drive digital transformation in Taiwan,” said Robert Wang, Managing Director, AWS Hong Kong and Taiwan.

“Thank you to AWS for recognizing MongoDB as the ISV Partner of the Year. This award demonstrates our commitment to delivering superior solutions for customers and recognizes the amazing work of the team over the past year,” said Gabriel Woo, Regional Vice President, Hong Kong, Taiwan & Macau at MongoDB. “The collaboration between AWS and MongoDB extends beyond individual opportunities. We will continue to work closely together to drive innovation and growth in the region.”

July 27, 2023
News

Ambee's AI Environmental Data Revolution: Powered by MongoDB Atlas

Ambee, a fast-growing climate tech start-up based in India, is making waves in the world of environmental data with its mission to create a sustainable future. With over 1 million daily active users, Ambee provides proprietary climate and environmental data-as-a-service to empower governments, healthcare organizations, and private companies to make informed decisions about their policies and business strategies. Their comprehensive data encompasses emissions, pollen levels, air quality, soil conditions, and more, all crucial for driving climate action while positively impacting businesses' bottom lines.

Ambee's pollen and air quality map

From the outset, MongoDB Atlas has been at the core of Ambee's database architecture, supporting their AI and ML models. Ambee needed something that could manage a vast and diverse data set. MongoDB's flexible document model proved to be a perfect fit, enabling them to store all their data in one centralized location and operationalize it for various use cases. On average, Ambee adds around 10 to 15GB of data every hour.

A significant advantage of MongoDB for Ambee lies in its ability to handle geospatial data, a critical element for their work. With data sourced from satellites, paid data providers, soil readings, airplanes, proprietary IoT devices, and much more, Ambee relies on MongoDB's geospatial capabilities to provide accurate and granular geographical insights. This precision is one of Ambee's key differentiators, setting them apart in the industry.

Ambee's use of artificial intelligence adds another layer of value to their data services. By running AI models on MongoDB Atlas, they not only deliver data-as-a-service to their clients but also provide intelligent recommendations. Ambee's AI-driven platform, Ambee AutoML, serves as a central repository, enabling developers with limited machine learning expertise to train high-quality models. This democratization of machine learning empowers a broader audience to harness its potential, which is crucial to Ambee's aim of fighting climate change with data.

The practical applications of Ambee's AI and data services are impressive. Ambee's data powers many companies across the Fortune 500, including Boots, Kimberly-Clark, and many more, supporting a variety of use cases. Be it personalized marketing or digital healthcare, Ambee's datasets have helped businesses worldwide achieve remarkable results. For instance, Boots, a leading British health and beauty retailer, uses Ambee's data to identify regions where pollen and environmental factors trigger allergies. AI recommendations help allocate resources efficiently, enabling Boots to mitigate the impact of allergies and enhance its bottom line while aiding more individuals in need. Ambee has also made a US pollen and air quality map publicly available for anyone to check. In another use case, Ambee employs AI models to forecast forest fires and their potential outcomes in the U.S. and Canada, providing organizations with critical warnings to protect lives and property in wildfire-prone areas.

Ambee's forest fire dashboard

Ambee's future looks promising as they continue to grow, covering more regions and incorporating more data, all of which makes their AI-powered services more powerful. The company's APIs are designed for ease of use, making it as simple as possible for developers to get started. This ease of use, along with extensive documentation, is helping drive the popularity of the service. The MongoDB-powered APIs receive more than 10 million calls every day.
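To give a flavor of the geospatial queries described above, here is a minimal sketch - with a hypothetical collection and field names - of the kind of lookup MongoDB's geospatial support enables: finding air-quality readings within 10 km of a point using a 2dsphere index.

```python
from pymongo import MongoClient, GEOSPHERE

client = MongoClient("mongodb://localhost:27017")   # placeholder URI
readings = client["ambee_demo"]["air_quality"]      # hypothetical names

# GeoJSON locations need a 2dsphere index before $near queries will run.
readings.create_index([("location", GEOSPHERE)])

nearby = readings.find({
    "location": {
        "$near": {
            # Coordinates are GeoJSON order: [longitude, latitude].
            "$geometry": {"type": "Point", "coordinates": [-73.99, 40.73]},
            "$maxDistance": 10_000,  # meters
        }
    }
})
for doc in nearby.limit(5):
    print(doc.get("aqi"), doc.get("timestamp"))
```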
Madhusudhan Anand, CTO of Ambee, said: "Our work with MongoDB Atlas showcases how we can create a sustainable future by providing easily accessible environmental data. MongoDB's unique capabilities in handling diverse data and geospatial information have been instrumental in our success. Together, we are shaping a greener world."

Extensive API documentation

As Ambee's popularity and impact continue to grow, its suite of data-driven products is expanding substantially. The company will soon launch a number of sophisticated tools and platforms to help businesses take their operations to the next level. The next big piece will be C6, a carbon management and accounting platform through which Ambee aims to help companies measure, report, and reduce their digital emissions. This will be followed by a programmatic advertising tool that can run campaigns based on environmental triggers. All of these will be powered by MongoDB Atlas. And to unlock these innovative AI solutions, Ambee's team is looking to take advantage of the full developer data platform - for example, using MongoDB Atlas Federated Queries and Atlas Search to make 70TB of exclusive environmental data operational.
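For readers unfamiliar with Atlas Search, the sketch below shows the general shape of a full-text query via the $search aggregation stage. The collection, field names, and search index name ("default") are assumptions for illustration; an Atlas Search index must already exist on the collection for the stage to run.

```python
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<cluster-uri>")  # placeholder Atlas URI
datasets = client["ambee_demo"]["datasets"]          # hypothetical names

results = datasets.aggregate([
    {"$search": {
        "index": "default",  # assumed Atlas Search index name
        "text": {"query": "pollen forecast", "path": "description"},
    }},
    {"$limit": 5},
    {"$project": {"name": 1, "description": 1, "_id": 0}},
])
for doc in results:
    print(doc["name"])
```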

July 26, 2023
Applied
