This document provides a collection of hard and soft limitations of the MongoDB system. The limitations on this page apply to deployments hosted in all of the following environments unless specified otherwise:
The following limitations apply only to deployments hosted in MongoDB Atlas. If any of these limits present a problem for your organization, contact Atlas support.

MongoDB Atlas limits concurrent incoming connections based on the cluster tier. MongoDB Atlas connection limits apply per node. For sharded clusters, MongoDB Atlas connection limits apply per router. The number of routers is equal to the number of replica set nodes across all shards. Your read preference also contributes to the total number of connections that MongoDB Atlas can allocate for a given query. MongoDB Atlas has the following connection limits for the specified cluster tiers:

Note: MongoDB Atlas reserves a small number of connections to each cluster for supporting MongoDB Atlas services.

If you're connecting to a multi-cloud MongoDB Atlas deployment through a private connection, you can access only the nodes in the same cloud provider that you're connecting from. This cloud provider might not have the primary node in its region. When this happens, you must specify the secondaryPreferred read preference mode in the connection string to access the deployment. If you need access to all nodes for your multi-cloud MongoDB Atlas deployment from your current provider through a private connection, you must perform one of the following actions:
While there is no hard limit on the number of collections in a single MongoDB Atlas cluster, the performance of a cluster might degrade if it serves a large number of collections and indexes. Larger collections have a greater impact on performance. The recommended maximum combined number of collections and indexes by MongoDB Atlas cluster tier is as follows:

MongoDB Atlas Cluster Tier: Recommended Maximum
M10: 5,000 collections and indexes
M20/M30: 10,000 collections and indexes
M40 and larger: 100,000 collections and indexes

MongoDB Atlas deployments have the following organization and project limits:

Component: Limit
Database users per MongoDB Atlas project: 100
Atlas users per MongoDB Atlas project: 500
Atlas users per MongoDB Atlas organization: 500
API keys per MongoDB Atlas organization: 500
Access list entries per MongoDB Atlas project: 200
Users per MongoDB Atlas team: 250
Teams per MongoDB Atlas project: 100
Teams per MongoDB Atlas organization: 250
Teams per MongoDB Atlas user: 100
Organizations per MongoDB Atlas user: 250
per MongoDB Atlas user: 50
Clusters per MongoDB Atlas project: 25
Projects per MongoDB Atlas organization: 250
Custom MongoDB roles per MongoDB Atlas project: 100
Assigned roles per database user: 100
Hourly billing per MongoDB Atlas organization: $50
per MongoDB Atlas project: 25
Total network peering connections per MongoDB Atlas project: 50. Additionally, MongoDB Atlas limits the number of nodes per network peering connection based on the CIDR block and the region selected for the project.
Pending network peering connections per MongoDB Atlas project: 25
addressable targets per region: 50
addressable targets per region: 150
Unique shard keys per MongoDB Atlas project: 40
Atlas Data Lake pipelines per MongoDB Atlas project: 25
M0 clusters per MongoDB Atlas project: 1

MongoDB Atlas limits the length of, and enforces regex requirements on, the following component labels:

Component: Character Limit
Cluster Name: 64
Project Name: 64
Organization Name: 64
API Key Description: 250

Additional limitations apply to MongoDB Atlas serverless instances, free clusters, and shared clusters. To learn more, see the following resources:
Some MongoDB commands are unsupported in MongoDB Atlas. Additionally, some commands are supported only in MongoDB Atlas free clusters. To learn more, see the following resources:
BSON Document Size
The maximum BSON document size is 16 megabytes. The maximum document size helps ensure that a single document cannot use an excessive amount of RAM or, during transmission, an excessive amount of bandwidth. To store documents larger than the maximum size, MongoDB provides the GridFS API. See the documentation for your driver for more information about GridFS.

Nested Depth for BSON Documents
MongoDB supports no more than 100 levels of nesting for BSON documents. Each object or array adds a level.

Use of Case in Database Names
Do not rely on case to distinguish between databases. For example, you cannot use two databases with names like salesData and SalesData. After you create a database in MongoDB, you must use consistent capitalization when you refer to it. For example, if you create the salesData database, do not refer to it using alternate capitalization such as salesdata or SalesData.

Restrictions on Database Names for Windows
For MongoDB deployments running on Windows, database names cannot contain any of the following characters: /\. "$*<>:|?
Database names also cannot contain the null character.

Restrictions on Database Names for Unix and Linux Systems
For MongoDB deployments running on Unix and Linux systems, database names cannot contain any of the following characters: /\. "$
Database names also cannot contain the null character.

Length of Database Names
Database names cannot be empty and must have fewer than 64 characters.

Restriction on Collection Names
Collection names should begin with an underscore or a letter character, and cannot:
If your collection name includes special characters, such as the underscore character, or begins with numbers, then to access the collection use the db.getCollection() method in mongosh or a similar method for your driver.

Namespace Length:
The field name _id is reserved for use as a primary key; its value must be unique in the collection, is immutable, and may be of any type other than an array. If the _id contains subfields, the subfield names cannot begin with a ($) sign.
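Both constraints above (the 100-level nesting limit and the ban on $-prefixed subfield names within _id) can be checked client-side before an insert. The helpers below are an illustrative sketch, not a driver API:

```javascript
// Sketch of client-side checks for two limits discussed above.
// Illustrative only; drivers and the server enforce the real limits.

// BSON allows at most 100 levels of nesting; each object or array
// adds a level. Scalars contribute no nesting of their own.
function nestingDepth(value) {
  if (value === null || typeof value !== "object") return 0;
  const children = Array.isArray(value) ? value : Object.values(value);
  let max = 0;
  for (const child of children) {
    max = Math.max(max, nestingDepth(child));
  }
  return 1 + max; // this object or array adds one level
}

// If _id is itself a document, its subfield names must not begin
// with a "$" sign.
function idSubfieldsValid(doc) {
  const id = doc._id;
  if (id === null || typeof id !== "object" || Array.isArray(id)) return true;
  return Object.keys(id).every((k) => !k.startsWith("$"));
}
```

A document passes only if `nestingDepth(doc) <= 100` and `idSubfieldsValid(doc)` holds.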
Warning: Use caution; the issues discussed in this section could lead to data loss or corruption.

The MongoDB Query Language does not support documents with duplicate field names. While some BSON builders may support creating a BSON document with duplicate field names, inserting these documents into MongoDB is not supported even if the insert succeeds or appears to succeed. For example, inserting a BSON document with duplicate field names through a MongoDB driver may result in the driver silently dropping the duplicate values prior to insertion, or may result in an invalid document being inserted that contains duplicate fields. Querying against any such documents would lead to arbitrary and inconsistent results.

Starting in MongoDB 5.0, document field names can be dollar ($) prefixed and can contain periods (.). However, mongoimport and mongoexport may not work as expected in some situations with field names that make use of these characters. MongoDB Extended JSON v2 cannot differentiate between type wrappers and fields that happen to have the same name as type wrappers. Do not use Extended JSON formats in contexts where the corresponding BSON representations might include dollar ($) prefixed keys. There are also restrictions on using mongoimport and mongoexport with periods (.) in field names. There is a small chance of data loss when using dollar ($) prefixed field names or field names that contain periods (.) if these field names are used in conjunction with unacknowledged writes (write concern w: 0) on servers that are older than MongoDB 5.0.

When running commands with such documents, drivers that are 5.0 compatible remove restrictions on using documents with field names that are dollar ($) prefixed. These field names generated a client-side error in earlier driver versions. The restrictions are removed regardless of the server version the driver is connected to. If a 5.0 driver sends a document to an older server, the document will be rejected without sending an error.
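Because BSON encodes a document as an ordered sequence of name/value pairs, a builder can physically emit the same name twice. A minimal client-side guard, modeling the document as the pair list a builder would write (the helper name is an assumption, not a library function):

```javascript
// BSON stores a document as an ordered sequence of (name, value)
// pairs, so nothing in the encoding itself prevents duplicates.
// This sketch checks a top-level pair list before it is handed to
// a BSON builder; embedded documents would need the same check.
function hasDuplicateFieldNames(pairs) {
  const seen = new Set();
  for (const [name] of pairs) {
    if (seen.has(name)) return true;
    seen.add(name);
  }
  return false;
}
```

Note that a plain JavaScript object cannot represent the problem at all: `{ a: 1, a: 3 }` silently keeps only the last value, which is why the sketch works on pair lists.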
Tip: See also: Index Key Limit

Note: Changed in version 4.2. For MongoDB 2.6 through MongoDB versions with fCV set to "4.0" or earlier, the total size of an index entry, which can include structural overhead depending on the BSON type, must be less than 1024 bytes. When the index key limit applies:
A single collection can have no more than 64 indexes.

Index Name Length
Note: Changed in version 4.2. In previous versions of MongoDB or MongoDB versions with fCV set to "4.0" or earlier, fully qualified index names, which include the namespace and the dot separators (i.e. <database name>.<collection name>.$<index name>), cannot be longer than 127 bytes. By default, the index name is the concatenation of the field names and index type. You can explicitly specify an index name to the createIndex() method to ensure that the fully qualified index name does not exceed the limit.

Number of Indexed Fields in a Compound Index
There can be no more than 32 fields in a compound index.

Queries cannot use both text and Geospatial Indexes
You cannot combine the $text query, which requires a special text index, with a query operator that requires a different type of special index. For example, you cannot combine the $text query with the $near operator.

Fields with 2dsphere Indexes can only hold Geometries
Fields with 2dsphere indexes must hold geometry data in the form of coordinate pairs or GeoJSON data. If you attempt to insert a document with non-geometry data in a 2dsphere indexed field, or build a 2dsphere index on a collection where the indexed field has non-geometry data, the operation will fail.

Tip: See also: Limited Number of 2dsphere index keys
To generate keys for a 2dsphere index, mongod maps GeoJSON shapes to an internal representation. The resulting internal representation may be a large array of values. When mongod generates index keys on a field that holds an array, mongod generates an index key for each array element. For compound indexes, mongod calculates the cartesian product of the sets of keys that are generated for each field. If both sets are large, calculating the cartesian product could cause the operation to exceed memory limits. mongod limits the maximum number of keys generated for a single document to prevent out-of-memory errors. The default is 100000 index keys per document. It is possible to raise the limit, but if an operation requires more keys than the parameter specifies, the operation will fail.

NaN values returned from Covered Queries by the WiredTiger Storage Engine are always of type double
If the value of a field returned from a query that is covered by an index is NaN, the type of that NaN value is always double.

Multikey Index
Multikey indexes cannot cover queries over array fields.

Geospatial Index
Geospatial indexes cannot cover a query.

Memory Usage in Index Builds
createIndexes supports building one or more indexes on a collection. createIndexes uses a combination of memory and temporary files on disk to complete index builds. The default limit on memory usage for createIndexes is 200 megabytes (for versions 4.2.3 and later) and 500 megabytes (for versions 4.2.2 and earlier), shared between all indexes built using a single createIndexes command. Once the memory limit is reached, createIndexes uses temporary disk files in a subdirectory named _tmp within the --dbpath directory to complete the build. You can override the memory limit by setting the maxIndexBuildMemoryUsageMegabytes server parameter. Setting a higher memory limit may result in faster completion of index builds. However, setting this limit too high relative to the unused RAM on your system can result in memory exhaustion and server shutdown.
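Among the limits above, the default index name construction is easy to sketch: the name is the field names and directions from the key specification joined with underscores, and under the old fCV "4.0" limit the fully qualified form had to stay under 127 bytes. The helper names below are illustrative:

```javascript
// By default, an index name concatenates the field names and
// values from the key specification, e.g. { borough: 1, ranking: -1 }
// becomes "borough_1_ranking_-1".
function defaultIndexName(keySpec) {
  return Object.entries(keySpec)
    .map(([field, type]) => `${field}_${type}`)
    .join("_");
}

// Under the pre-4.2 limit, the fully qualified name
// <database>.<collection>.$<index name> had to be under 127 bytes.
// String length approximates byte length for ASCII names only.
function fullyQualifiedLength(db, coll, keySpec) {
  return `${db}.${coll}.$${defaultIndexName(keySpec)}`.length;
}
```

Explicitly naming an index with createIndex() is the documented way to keep the fully qualified name short when the default concatenation would be too long.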
Index builds may be initiated either by a user command such as createIndexes or by an administrative process such as an initial sync. Both are subject to the limit set by maxIndexBuildMemoryUsageMegabytes. An initial sync populates only one collection at a time and has no risk of exceeding the memory limit. However, it is possible for a user to start index builds on multiple collections in multiple databases simultaneously and potentially consume an amount of memory greater than the limit set by maxIndexBuildMemoryUsageMegabytes.

Tip: To minimize the impact of building an index on replica sets and sharded clusters with replica set shards, use a rolling index build procedure.

Collation and Index Types
The following index types only support simple binary comparison and do not support collation:

Tip: To create a text or 2d index on a collection that has a non-simple collation, you must explicitly specify {collation: {locale: "simple"}} when creating the index.

Hidden Indexes
Maximum Number of Sort Keys
You can sort on a maximum of 32 keys.

Maximum Number of Documents in a Capped Collection
If you specify the maximum number of documents in a capped collection with createCollection()'s max parameter, the value must be less than 2^31 documents. If you do not specify a maximum number of documents when creating a capped collection, there is no limit on the number of documents.

Number of Members of a Replica Set
Replica sets can have up to 50 members.

Number of Voting Members of a Replica Set
Replica sets can have up to 7 voting members. For replica sets with more than 7 total members, see non-voting members.

Maximum Size of Auto-Created Oplog
If you do not explicitly specify an oplog size (i.e. with oplogSizeMB or --oplogSize), MongoDB will create an oplog that is no larger than 50 gigabytes.

Sharded clusters have the restrictions and thresholds described here.

Operations Unavailable in Sharded Environments
$where does not permit references to the db object from the $where function. This is uncommon in unsharded collections. The geoSearch command is not supported in sharded environments. In MongoDB 5.0 and earlier, you cannot specify sharded collections in the from parameter of $lookup stages.

Covered Queries in Sharded Clusters
When run on mongos, indexes can only cover queries on sharded collections if the index contains the shard key.

Single Document Modification Operations in Sharded Collections
To use update and remove operations for a sharded collection that specify the justOne or multi: false option:
MongoDB does not support unique indexes across shards, except when the unique index contains the full shard key as a prefix of the index. In these situations MongoDB will enforce uniqueness across the full key, not a single field.

Tip: See: Maximum Number of Documents Per Range to Migrate
By default, MongoDB cannot move a range if the number of documents in the range is greater than 2 times the result of dividing the configured range size by the average document size. If MongoDB can move a sub-range of a chunk and reduce the size to less than that, the balancer does so by migrating a range. db.collection.stats() includes the avgObjSize field, which represents the average document size in the collection. For chunks that are
Shard Key Size
Starting in version 4.4, MongoDB removes the limit on the shard key size. For MongoDB 4.2 and earlier, a shard key cannot exceed 512 bytes.

Shard Key Index Type
A shard key index can be an ascending index on the shard key, a compound index that starts with the shard key and specifies ascending order for the shard key, or a hashed index. A shard key index cannot be:
Your options for changing a shard key depend on the version of MongoDB that you are running:
In MongoDB 4.2 and earlier, to change a shard key:
For clusters with high insert volumes, a shard key with monotonically increasing and decreasing keys can affect insert throughput. If your shard key is the _id field, be aware that the default values of the _id field are ObjectId values, which have generally increasing values. When inserting documents with monotonically increasing shard keys, all inserts belong to the same chunk on a single shard. The system eventually divides the chunk range that receives all write operations and migrates its contents to distribute data more evenly. However, at any moment the cluster directs insert operations only to a single shard, which creates an insert throughput bottleneck. If the operations on the cluster are predominantly read operations and updates, this limitation may not affect the cluster. To avoid this constraint, use a hashed shard key or select a field that does not increase or decrease monotonically. Hashed indexes and hashed shard keys store hashes of keys with ascending values.

Sort Operations
If MongoDB cannot use an index or indexes to obtain the sort order, MongoDB must perform a blocking sort operation on the data. The name refers to the requirement that the sort stage reads all input documents before returning any output documents, blocking the flow of data for that specific query. If MongoDB requires using more than 100 megabytes of system memory for the blocking sort operation, MongoDB returns an error unless the query specifies allowDiskUse (new in MongoDB 4.4). allowDiskUse allows MongoDB to use temporary files on disk to store data exceeding the 100 megabyte system memory limit while processing a blocking sort operation.

Changed in version 4.4: For MongoDB 4.2 and prior, blocking sort operations could not exceed 32 megabytes of system memory.

For more information, see the documentation on sorts and index use.

Aggregation Pipeline Operation
Starting in MongoDB 6.0, the allowDiskUseByDefault parameter controls whether pipeline stages that require more than 100 megabytes of memory to execute write temporary files to disk by default.
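The on-disk spill that allowDiskUse enables is essentially an external merge sort: sort bounded-size runs, store each run in a temporary file, then merge the runs. The sketch below keeps the "files" as in-memory arrays to stay self-contained:

```javascript
// External-merge-sort sketch of how a blocking sort can spill once
// an in-memory budget is exceeded. Runs are plain arrays here; a
// server would write each sorted run to a temporary disk file.
function externalSort(docs, maxInMemory, compare) {
  // Phase 1: produce sorted runs no larger than the memory budget.
  const runs = [];
  for (let i = 0; i < docs.length; i += maxInMemory) {
    runs.push(docs.slice(i, i + maxInMemory).sort(compare));
  }
  // Phase 2: k-way merge of the sorted runs, reading one element
  // at a time from each run.
  const positions = runs.map(() => 0);
  const out = [];
  while (out.length < docs.length) {
    let best = -1;
    for (let r = 0; r < runs.length; r++) {
      if (positions[r] >= runs[r].length) continue;
      if (best === -1 ||
          compare(runs[r][positions[r]], runs[best][positions[best]]) < 0) {
        best = r;
      }
    }
    out.push(runs[best][positions[best]++]);
  }
  return out;
}
```

The merge phase only ever holds one element per run in memory, which is why the technique stays within a fixed memory budget regardless of input size.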
The $search aggregation stage is not restricted to 100 megabytes of RAM because it runs in a separate process.

Examples of stages that can write temporary files to disk when allowDiskUseByDefault is true are:
Note: Pipeline stages operate on streams of documents, with each pipeline stage taking in documents, processing them, and then outputting the resulting documents. Some stages can't output any documents until they have processed all incoming documents. These pipeline stages must keep their stage output in RAM until all incoming documents are processed. As a result, these pipeline stages may require more space than the 100 MB limit. If the results of one of your pipeline stages exceed the limit, consider

Starting in MongoDB 4.2, the profiler log messages and diagnostic log messages include a usedDisk indicator if any aggregation stage wrote data to temporary files due to memory restrictions.

Aggregation and Read Concern
Tip: Geospatial Queries
Using a 2d index for queries on spherical data can return incorrect results or an error. For example, 2d indexes don't support spherical queries that wrap around the poles.

Geospatial Coordinates
For $geoIntersects or $geoWithin, if you specify a single-ringed polygon that has an area greater than a single hemisphere, include the custom MongoDB coordinate reference system in the $geometry expression. Otherwise, $geoIntersects or $geoWithin queries for the complementary geometry. For all other GeoJSON polygons with areas greater than a hemisphere, $geoIntersects or $geoWithin queries for the complementary geometry.

Multi-document Transactions
For multi-document transactions:

Changed in version 4.4. The following operations are not allowed in transactions:
Transactions have a lifetime limit as specified by transactionLifetimeLimitSeconds. The default is 60 seconds.

Write Command Batch Limit Size
100,000 writes are allowed in a single batch operation, defined by a single request to the server.

Changed in version 3.6: The limit raises from 1,000 to 100,000 writes. This limit also applies to legacy OP_INSERT messages.

The Bulk() operations in mongosh and comparable methods in the drivers do not have this limit.

Views
A view definition pipeline cannot include the $out or the $merge stage. This restriction also applies to embedded pipelines, such as pipelines used in $lookup or $facet stages.

Views have the following operation restrictions:
New in version 4.4:
$-Prefixed Field Path Restriction
Starting in MongoDB 4.4, the find() and findAndModify() projection cannot project a field that starts with $, with the exception of the DBRef fields. For example, starting in MongoDB 4.4, the following operation is invalid:

In earlier versions, MongoDB ignores the $-prefixed field projections.
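A client-side sketch of the $-prefix projection rule above, assuming the DBRef exception covers the $ref, $id, and $db fields (the helper is illustrative, not a driver API):

```javascript
// Starting in MongoDB 4.4, find() and findAndModify() projections
// reject fields that start with "$", except the DBRef fields.
// Assumption: the DBRef fields are $ref, $id, and $db.
const DBREF_FIELDS = new Set(["$ref", "$id", "$db"]);

function projectionFieldAllowed(path) {
  // Only the leading path component is checked here for simplicity;
  // the server's validation rules are more detailed.
  const first = path.split(".")[0];
  return !first.startsWith("$") || DBREF_FIELDS.has(first);
}
```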
$ Positional Operator Placement Restriction
Starting in MongoDB 4.4, the $ projection operator can only appear at the end of the field path, for example "field.$" or "fieldA.fieldB.$". For example, starting in MongoDB 4.4, the following operation is invalid:

To resolve, remove the component of the field path that follows the $ projection operator. In previous versions, MongoDB ignores the part of the path that follows the $; i.e. the projection is treated as "field.$".

Empty Field Name Projection Restriction
Starting in MongoDB 4.4, find() and findAndModify() projection cannot include a projection of an empty field name. For example, starting in MongoDB 4.4, the following operation is invalid:
In previous versions, MongoDB treats the inclusion/exclusion of the empty field as it would the projection of non-existing fields.

Path Collision: Embedded Documents and Its Fields
Starting in MongoDB 4.4, it is illegal to project an embedded document with any of the embedded document's fields. For example, consider a collection inventory with documents that contain a size field:

Starting in MongoDB 4.4, the following operation fails with a Path collision error because it attempts to project both the size document and the size.uom field:
In previous versions, the lattermost projection between the embedded document and its fields determines the projection:

$slice of an Array and Embedded Fields
Starting in MongoDB 4.4, find() and findAndModify() projection cannot contain both a $slice of an array and a field embedded in the array. For example, consider a collection inventory that contains an array field instock:

Starting in MongoDB 4.4, the following operation fails with a Path collision error:

In previous versions, the projection applies both projections and returns the first element ($slice: 1) in the instock array but suppresses the warehouse field in the projected element. Starting in MongoDB 4.4, to achieve the same result, use the db.collection.aggregate() method with two separate $project stages.
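The two-stage workaround can be mimicked on plain objects: the first stage applies the slice, the second suppresses the embedded field. This is a plain JavaScript sketch, not driver code; the field names (instock, warehouse, qty) follow the example discussed above:

```javascript
// Mimics two separate $project stages on plain objects:
// stage 1 slices the instock array to its first element,
// stage 2 removes the embedded warehouse field.
function sliceStage(doc) {
  return { ...doc, instock: doc.instock.slice(0, 1) };
}

function suppressWarehouseStage(doc) {
  return {
    ...doc,
    instock: doc.instock.map(({ warehouse, ...rest }) => rest),
  };
}

const doc = {
  _id: 1,
  instock: [{ warehouse: "A", qty: 35 }, { warehouse: "C", qty: 15 }],
};
const projected = suppressWarehouseStage(sliceStage(doc));
// projected holds only the first instock element, without warehouse
```

Running the two stages in sequence avoids the single-projection path collision that MongoDB 4.4 rejects.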
$ Positional Operator and $slice Restriction
Starting in MongoDB 4.4, find() and findAndModify() projection cannot include a $slice projection expression as part of a $ projection expression. For example, starting in MongoDB 4.4, the following operation is invalid:

In previous versions, MongoDB returns the first element (instock.$) in the instock array that matches the query condition; i.e. the positional projection "instock.$" takes precedence and the $slice: 1 is a no-op. The "instock.$": { $slice: 1 } does not exclude any other document field.

Sessions and $external Username Limit
To use sessions with $external authentication users (Kerberos, LDAP, or x.509 users), usernames cannot be greater than 10k bytes.

Session Idle Timeout
Sessions that receive no read or write operations for 30 minutes or that are not refreshed using refreshSessions within this threshold are marked as expired and can be closed by the MongoDB server at any time. Closing a session kills any in-progress operations and open cursors associated with the session. This includes cursors configured with noCursorTimeout() or a maxTimeMS() greater than 30 minutes.

Consider an application that issues a find(). The server returns a cursor along with a batch of documents defined by the batchSize() of the find(). The session refreshes each time the application requests a new batch of documents from the server. However, if the application takes longer than 30 minutes to process the current batch of documents, the session is marked as expired and closed. When the application requests the next batch of documents, the server returns an error as the cursor was killed when the session was closed.

For operations that return a cursor, if the cursor may be idle for longer than 30 minutes, issue the operation within an explicit session using Mongo.startSession() and periodically refresh the session using the refreshSessions command. For example:
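A sketch of such a session-refreshing loop in mongosh syntax; it assumes a live deployment (the test database and items collection are placeholders), so the function is defined but not invoked here:

```javascript
// Sketch: run a find() in an explicit session, disable the cursor
// timeout, and refresh the session every 5 minutes while iterating.
// Assumes mongosh and a live deployment; adapt names to your driver.
function pollWithRefresh(mongoConn) {
  const session = mongoConn.startSession();
  const sessionId = session.getSessionId().id;
  const db = session.getDatabase("test"); // placeholder database
  const cursor = db.items.find().noCursorTimeout(); // placeholder collection
  let lastRefresh = Date.now();
  while (cursor.hasNext()) {
    const doc = cursor.next();
    // ...process doc, which may take a long time...
    if (Date.now() - lastRefresh > 5 * 60 * 1000) {
      // Keep the session alive so the server does not expire it.
      db.adminCommand({ refreshSessions: [{ id: sessionId }] });
      lastRefresh = Date.now();
    }
  }
}
```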
In the example operation, the find() method is associated with an explicit session. The cursor is configured with noCursorTimeout() to prevent the server from closing the cursor if idle. The while loop includes a block that uses refreshSessions to refresh the session every 5 minutes. Since the session will never exceed the 30 minute idle timeout, the cursor can remain open indefinitely.