MongoDB Limits and Thresholds

This document provides a collection of hard and soft limitations of the MongoDB system. The limitations on this page apply to deployments hosted in all of the following environments unless specified otherwise:

  • MongoDB Atlas: The fully managed service for MongoDB deployments in the cloud
  • MongoDB Enterprise: The subscription-based, self-managed version of MongoDB
  • MongoDB Community: The source-available, free-to-use, and self-managed version of MongoDB

The following limitations apply only to deployments hosted in MongoDB Atlas. If any of these limits present a problem for your organization, contact Atlas support.

MongoDB Atlas limits concurrent incoming connections based on the cluster tier. MongoDB Atlas connection limits apply per node. For sharded clusters, MongoDB Atlas connection limits apply per router. The number of routers is equal to the number of replica set nodes across all shards.

Your read preference also contributes to the total number of connections that MongoDB Atlas can allocate for a given query.

MongoDB Atlas has the following connection limits for the specified cluster tiers:

Note

MongoDB Atlas reserves a small number of connections to each cluster for supporting MongoDB Atlas services.

If you're connecting to a multi-cloud MongoDB Atlas deployment through a private connection, you can access only the nodes in the same cloud provider that you're connecting from. This cloud provider might not have the primary node in its region. When this happens, you must specify the secondaryPreferred read preference mode in the connection string to access the deployment.

If you need access to all nodes for your multi-cloud MongoDB Atlas deployment from your current provider through a private connection, you must perform one of the following actions:

  • Configure a VPN in the current provider to each of the remaining providers.
  • Configure a private endpoint to MongoDB Atlas for each of the remaining providers.

While there is no hard limit on the number of collections in a single MongoDB Atlas cluster, the performance of a cluster might degrade if it serves a large number of collections and indexes. Larger collections have a greater impact on performance.

The recommended maximum combined number of collections and indexes by MongoDB Atlas cluster tier are as follows:

MongoDB Atlas Cluster Tier    Recommended Maximum
M10                           5,000 collections and indexes
M20/M30                       10,000 collections and indexes
M40 and larger                100,000 collections and indexes

MongoDB Atlas deployments have the following organization and project limits:

Component                                                        Limit
Database users per MongoDB Atlas project                         100
Atlas users per MongoDB Atlas project                            500
Atlas users per MongoDB Atlas organization                       500
API Keys per MongoDB Atlas organization                          500
Access list entries per MongoDB Atlas project                    200
Users per MongoDB Atlas team                                     250
Teams per MongoDB Atlas project                                  100
Teams per MongoDB Atlas organization                             250
Teams per MongoDB Atlas user                                     100
Organizations per MongoDB Atlas user                             250
per MongoDB Atlas user                                           50
Clusters per MongoDB Atlas project                               25
Projects per MongoDB Atlas organization                          250
Custom MongoDB roles per MongoDB Atlas project                   100
Assigned roles per database user                                 100
Hourly billing per MongoDB Atlas organization                    $50
per MongoDB Atlas project                                        25
Total network peering connections per MongoDB Atlas project      50 (MongoDB Atlas also limits the number of nodes per network peering connection based on the CIDR block and the region selected for the project.)
Pending network peering connections per MongoDB Atlas project    25
addressable targets per region                                   50
addressable targets per region                                   150
Unique shard keys per MongoDB Atlas project                      40
Atlas Data Lake pipelines per MongoDB Atlas project              25
M0 clusters per MongoDB Atlas project                            1

MongoDB Atlas limits the length of the following component labels and enforces RegEx requirements on them:

Component              Character Limit
Cluster Name           64
Project Name           64
Organization Name      64
API Key Description    250

Additional limitations apply to MongoDB Atlas serverless instances, free clusters, and shared clusters. To learn more, see the following resources:

  • Serverless Instance Limitations
  • Atlas M0 (Free Cluster), M2, and M5 Limitations

Some MongoDB commands are unsupported in MongoDB Atlas. Additionally, some commands are supported only in MongoDB Atlas free clusters. To learn more, see the following resources:

  • Unsupported Commands in Atlas
  • Commands Available Only in Free Clusters

BSON Document Size

The maximum BSON document size is 16 megabytes.

The maximum document size helps ensure that a single document cannot use an excessive amount of RAM or, during transmission, an excessive amount of bandwidth. To store documents larger than the maximum size, MongoDB provides the GridFS API. See the GridFS documentation and the documentation for your driver for more information about GridFS.

Nested Depth for BSON Documents

MongoDB supports no more than 100 levels of nesting for BSON documents. Each object or array adds a level.
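A document can be checked against this nesting limit client-side before insertion; a minimal sketch (bsonDepth is an illustrative helper, not a driver API):

```javascript
// Compute the nesting depth of a document the way the limit counts it:
// each object or array adds a level; scalars add none.
function bsonDepth(value) {
  if (value === null || typeof value !== "object") {
    return 0; // scalars add no nesting level
  }
  const children = Array.isArray(value) ? value : Object.values(value);
  let deepest = 0;
  for (const child of children) {
    deepest = Math.max(deepest, bsonDepth(child));
  }
  return 1 + deepest; // this object/array itself adds a level
}

const doc = { a: { b: [{ c: 1 }] } };
console.log(bsonDepth(doc));        // 4: doc, a, the array, and the array element
console.log(bsonDepth(doc) <= 100); // true: within the 100-level limit
```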

Use of Case in Database Names

Do not rely on case to distinguish between databases. For example, you cannot use two databases with names like salesData and SalesData.

After you create a database in MongoDB, you must use consistent capitalization when you refer to it. For example, if you create the salesData database, do not refer to it using alternate capitalization such as salesdata or SalesData.

Restrictions on Database Names for Windows

For MongoDB deployments running on Windows, database names cannot contain any of the following characters: / \ . " $ * < > : | ?

Also, database names cannot contain the null character.

Restrictions on Database Names for Unix and Linux Systems

For MongoDB deployments running on Unix and Linux systems, database names cannot contain any of the following characters: / \ . " $

Also, database names cannot contain the null character.

Length of Database Names

Database names cannot be empty and must have fewer than 64 characters.
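A client-side pre-check for these naming rules is straightforward; the sketch below assumes the per-platform character sets listed above (validateDbName is an illustrative helper, not a driver API):

```javascript
// Characters forbidden in database names, per platform, plus the null
// character, which is forbidden everywhere.
const FORBIDDEN = {
  windows: new Set([..."/\\.\"$*<>:|?", "\u0000"]),
  unix: new Set([..."/\\.\"$", "\u0000"]),
};

// Returns null if the name is valid, or a string describing the problem.
function validateDbName(name, platform = "unix") {
  if (name.length === 0) return "database names cannot be empty";
  if (name.length >= 64) return "database names must have fewer than 64 characters";
  for (const ch of name) {
    if (FORBIDDEN[platform].has(ch)) {
      return `character ${JSON.stringify(ch)} is not allowed on ${platform}`;
    }
  }
  return null; // valid
}

console.log(validateDbName("salesData"));    // null (valid)
console.log(validateDbName("sales$Data"));   // "$" is forbidden
console.log(validateDbName("a".repeat(64))); // too long
```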

Restriction on Collection Names

Collection names should begin with an underscore or a letter character, and cannot:

  • contain the $ character.
  • be an empty string (e.g. "").
  • contain the null character.
  • begin with the system. prefix. (Reserved for internal use.)

If your collection name includes special characters, such as the underscore character, or begins with numbers, then to access the collection use the db.getCollection() method in the mongo shell or a similar method for your driver.
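The collection name rules can likewise be pre-checked, together with the namespace byte limits covered in the next section (255 bytes unsharded, 235 bytes sharded, under fCV "4.4" or greater). A minimal sketch with illustrative helper names:

```javascript
// Validate a collection name against the documented restrictions.
// Returns null if valid, or a string describing the problem.
function validateCollectionName(name) {
  if (name.length === 0) return "cannot be an empty string";
  if (name.includes("$")) return "cannot contain $";
  if (name.includes("\u0000")) return "cannot contain the null character";
  if (name.startsWith("system.")) return "system. prefix is reserved";
  if (!/^[A-Za-z_]/.test(name)) return "should begin with a letter or underscore";
  return null;
}

// Namespace length is measured in bytes, so multi-byte UTF-8
// characters count more than once.
function namespaceBytes(dbName, collName) {
  return Buffer.byteLength(`${dbName}.${collName}`, "utf8");
}

console.log(validateCollectionName("inventory"));          // null (valid)
console.log(validateCollectionName("system.profile"));     // reserved prefix
console.log(namespaceBytes("sales", "inventory"));         // 15 bytes
console.log(namespaceBytes("sales", "inventory") <= 255);  // true: within the unsharded limit
```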

Namespace Length

  • For featureCompatibilityVersion set to "4.4" or greater, MongoDB raises the limit for unsharded collections and views to 255 bytes, and to 235 bytes for sharded collections. For a collection or a view, the namespace includes the database name, the dot (.) separator, and the collection/view name (e.g. <database>.<collection>).
  • For featureCompatibilityVersion set to "4.2" or earlier, the maximum length of the unsharded collections and views namespace remains 120 bytes, and 100 bytes for sharded collections.

Restrictions on Field Names

  • Field names cannot contain the null character.
  • The server permits storage of field names that contain dots (.) and dollar signs ($).
  • MongoDB 5.0 adds improved support for the use of dollar ($) and dot (.) characters in field names, with some restrictions.

Restrictions on _id

The field name _id is reserved for use as a primary key; its value must be unique in the collection, is immutable, and may be of any type other than an array. If the _id contains subfields, the subfield names cannot begin with a ($) symbol.

Warning

Use caution: the issues discussed in this section could lead to data loss or corruption.

The MongoDB Query Language does not support documents with duplicate field names. While some BSON builders may support creating a BSON document with duplicate field names, inserting these documents into MongoDB is not supported even if the insert succeeds, or appears to succeed. For example, inserting a BSON document with duplicate field names through a MongoDB driver may result in the driver silently dropping the duplicate values prior to insertion, or may result in an invalid document being inserted that contains duplicate fields. Querying against any such documents would lead to arbitrary and inconsistent results.
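A plain JavaScript illustration of the silent-collapse behavior described above, using JSON.parse, which keeps only the last occurrence of a duplicate key, much as a BSON builder or driver may quietly drop a duplicate before insertion:

```javascript
// JSON with a duplicate field name. JSON.parse keeps only the last
// occurrence, silently dropping the first value -- the server never
// sees both, and no error is raised.
const raw = '{ "qty": 35, "qty": 15 }';
const parsed = JSON.parse(raw);

console.log(parsed);                     // { qty: 15 }
console.log(Object.keys(parsed).length); // 1 -- the duplicate is gone
```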

Starting in MongoDB 5.0, document field names can be dollar ($) prefixed and can contain periods (.). However, mongoimport and mongoexport may not work as expected in some situations with field names that make use of these characters.

Extended JSON cannot differentiate between type wrappers and fields that happen to have the same name as type wrappers. Do not use Extended JSON formats in contexts where the corresponding BSON representations might include dollar ($) prefixed keys. The DBRef mechanism is an exception to this general rule.

There are also restrictions on using mongoimport and mongoexport with periods (.) in field names. Since CSV files use the period (.) to represent data hierarchies, a period (.) in a field name will be misinterpreted as a level of nesting.

There is a small chance of data loss when using dollar ($) prefixed field names or field names that contain periods (.) if these field names are used in conjunction with unacknowledged writes (write concern w: 0) on servers that are older than MongoDB 5.0.

When running insert, update, and findAndModify commands, drivers that are 5.0 compatible remove restrictions on using documents with field names that are dollar ($) prefixed or that contain periods (.). These field names generated a client-side error in earlier driver versions.

The restrictions are removed regardless of the server version the driver is connected to. If a 5.0 driver sends a document to an older server, the document will be rejected without sending an error.


Index Key Limit

Note

Changed in version 4.2.

For MongoDB 2.6 through MongoDB versions with featureCompatibilityVersion (fCV) set to "4.0" or earlier, the total size of an index entry, which can include structural overhead depending on the BSON type, must be less than 1024 bytes.

When the index key limit applies:

  • MongoDB will not create an index on a collection if the index entry for an existing document exceeds the index key limit.
  • Reindexing operations will error if the index entry for an indexed field exceeds the index key limit. Reindexing operations occur as part of the compact command as well as the db.collection.reIndex() method. Because these operations drop all the indexes from a collection and then recreate them sequentially, the error from the index key limit prevents these operations from rebuilding any remaining indexes for the collection.
  • MongoDB will not insert into an indexed collection any document with an indexed field whose corresponding index entry would exceed the index key limit, and instead, will return an error. Previous versions of MongoDB would insert but not index such documents.
  • Updates to the indexed field will error if the updated value causes the index entry to exceed the index key limit. If an existing document contains an indexed field whose index entry exceeds the limit, any update that results in the relocation of that document on disk will error.
  • mongorestore and mongoimport will not insert documents that contain an indexed field whose corresponding index entry would exceed the index key limit.
  • In MongoDB 2.6, secondary members of replica sets will continue to replicate documents with an indexed field whose corresponding index entry exceeds the index key limit on initial sync, but will print warnings in the logs. Secondary members also allow index build and rebuild operations on a collection that contains an indexed field whose corresponding index entry exceeds the index key limit, but with warnings in the logs. With mixed version replica sets where the secondaries are version 2.6 and the primary is version 2.4, secondaries will replicate documents inserted or updated on the 2.4 primary, but will print error messages in the log if the documents contain an indexed field whose corresponding index entry exceeds the index key limit.
  • For existing sharded collections, chunk migration will fail if the chunk has a document that contains an indexed field whose index entry exceeds the index key limit.

Number of Indexes per Collection

A single collection can have no more than 64 indexes.

Index Name Length

Note

Changed in version 4.2

In previous versions of MongoDB or MongoDB versions with fCV set to "4.0" or earlier, fully qualified index names, which include the namespace and the dot separators (i.e. <database name>.<collection name>.$<index name>), cannot be longer than 127 bytes.

By default, <index name> is the concatenation of the field names and index type. You can explicitly specify the <index name> to the createIndex() method to ensure that the fully qualified index name does not exceed the limit.
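As a sketch of that naming convention, the default index name joins the field names and sort directions with underscores, and the fully qualified form prepends the namespace. The helpers below are illustrative, not driver APIs:

```javascript
// Build the default index name for a key spec,
// e.g. { a: 1, b: -1 } -> "a_1_b_-1".
function defaultIndexName(keySpec) {
  return Object.entries(keySpec)
    .map(([field, dir]) => `${field}_${dir}`)
    .join("_");
}

// Fully qualified name: <database>.<collection>.$<index name>,
// measured in bytes.
function fullyQualifiedIndexNameBytes(dbName, collName, indexName) {
  return Buffer.byteLength(`${dbName}.${collName}.$${indexName}`, "utf8");
}

const name = defaultIndexName({ category: 1, price: -1 });
console.log(name); // "category_1_price_-1"
console.log(fullyQualifiedIndexNameBytes("shop", "inventory", name)); // 35
console.log(fullyQualifiedIndexNameBytes("shop", "inventory", name) <= 127); // true
```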

Number of Indexed Fields in a Compound Index

There can be no more than 32 fields in a compound index.

Queries cannot use both text and Geospatial Indexes

You cannot combine the $text query, which requires a special text index, with a query operator that requires a different type of special index. For example, you cannot combine the $text query with the $near operator.

Fields with 2dsphere Indexes can only hold Geometries

Fields with 2dsphere indexes must hold geometry data in the form of coordinate pairs or GeoJSON data. If you attempt to insert a document with non-geometry data in a 2dsphere indexed field, or build a 2dsphere index on a collection where the indexed field has non-geometry data, the operation will fail.


Limited Number of 2dsphere index keys

To generate keys for a 2dsphere index, mongod maps GeoJSON shapes to an internal representation. The resulting internal representation may be a large array of values.

When mongod generates index keys on a field that holds an array, mongod generates an index key for each array element. For compound indexes, mongod calculates the cartesian product of the sets of keys that are generated for each field. If both sets are large, then calculating the cartesian product could cause the operation to exceed memory limits.

mongod limits the maximum number of keys generated for a single document to prevent out of memory errors. The default is 100000 index keys per document. It is possible to raise the limit, but if an operation requires more keys than the indexMaxNumGeneratedKeysPerDocument parameter specifies, the operation will fail.
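The cartesian-product growth is easy to quantify: for one document, a compound multikey index generates the product of the per-field key counts. A sketch (estimateIndexKeys is an illustrative helper, and the per-field counts are hypothetical):

```javascript
// keysPerField: how many index keys each indexed field generates for one
// document (an array field generates one key per element; a 2dsphere
// field can map to many internal keys for a large geometry).
function estimateIndexKeys(keysPerField) {
  return keysPerField.reduce((product, n) => product * n, 1);
}

// Example: one field holding a 500-element array, another mapping to
// 300 internal 2dsphere keys.
const keys = estimateIndexKeys([500, 300]);
console.log(keys);          // 150000
console.log(keys > 100000); // true: already above the default per-document limit
```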

NaN values returned from Covered Queries by the WiredTiger Storage Engine are always of type double

If the value of a field returned from a query that is covered by an index is NaN, the type of that NaN value is always double.

Multikey Index

Multikey indexes cannot cover queries over array fields.

Geospatial Index

Geospatial indexes cannot cover a query.

Memory Usage in Index Builds

createIndexes supports building one or more indexes on a collection. createIndexes uses a combination of memory and temporary files on disk to complete index builds. The default limit on memory usage for createIndexes is 200 megabytes (for versions 4.2.3 and later) and 500 megabytes (for versions 4.2.2 and earlier), shared between all indexes built using a single createIndexes command. Once the memory limit is reached, createIndexes uses temporary disk files in a subdirectory named _tmp within the data directory to complete the build.

You can override the memory limit by setting the maxIndexBuildMemoryUsageMegabytes server parameter. Setting a higher memory limit may result in faster completion of index builds. However, setting this limit too high relative to the unused RAM on your system can result in memory exhaustion and server shutdown.

  • For feature compatibility version (fCV) "4.2" and later, the index build memory limit applies to all index builds.

Index builds may be initiated either by a user command such as createIndexes or by an administrative process such as an initial sync. Both are subject to the limit set by maxIndexBuildMemoryUsageMegabytes.

An initial sync populates only one collection at a time and has no risk of exceeding the memory limit. However, it is possible for a user to start index builds on multiple collections in multiple databases simultaneously and potentially consume an amount of memory greater than the limit set by maxIndexBuildMemoryUsageMegabytes.

Tip

To minimize the impact of building an index on replica sets and sharded clusters with replica set shards, use a rolling index build procedure as described in Rolling Index Builds on Replica Sets.

Collation and Index Types

The following index types only support simple binary comparison and do not support collation:

  • text indexes
  • 2d indexes
  • geoHaystack indexes

Tip

To create a text or 2d index on a collection that has a non-simple collation, you must explicitly specify { collation: { locale: "simple" } } when creating the index.

Hidden Indexes

  • You cannot hide the _id index.
  • You cannot use hint() on a hidden index.

Maximum Number of Sort Keys

You can sort on a maximum of 32 keys.

Maximum Number of Documents in a Capped Collection

If you specify the maximum number of documents in a capped collection with create's max parameter, the value must be less than 2^31 documents.

If you do not specify a maximum number of documents when creating a capped collection, there is no limit on the number of documents.

Number of Members of a Replica Set

Replica sets can have up to 50 members.

Number of Voting Members of a Replica Set

Replica sets can have up to 7 voting members. For replica sets with more than 7 total members, see Non-Voting Members.

Maximum Size of Auto-Created Oplog

If you do not explicitly specify an oplog size (i.e. with oplogSizeMB or --oplogSize), MongoDB will create an oplog that is no larger than 50 gigabytes.

Sharded clusters have the restrictions and thresholds described here.

Operations Unavailable in Sharded Environments

$where does not permit references to the db object from the $where function. This is uncommon in unsharded collections.

The geoSearch command is not supported in sharded environments.

In MongoDB 5.0 and earlier, you cannot specify sharded collections in the from parameter of $lookup stages.

Covered Queries in Sharded Clusters

When run on mongos, indexes can only cover queries on sharded collections if the index contains the shard key.

Single Document Modification Operations in Sharded Collections

To use update and remove operations for a sharded collection that specify the justOne or multi: false option:

  • If you only target one shard, you can use a partial shard key in the query specification, or
  • You can provide the shard key or the _id field in the query specification.

Unique Indexes in Sharded Collections

MongoDB does not support unique indexes across shards, except when the unique index contains the full shard key as a prefix of the index. In these situations MongoDB will enforce uniqueness across the full key, not a single field.


Maximum Number of Documents Per Range to Migrate

By default, MongoDB cannot move a range if the number of documents in the range is greater than 2 times the result of dividing the configured range size by the average document size. If MongoDB can move a sub-range of a chunk and reduce the size to less than that, the balancer does so by migrating a range. db.collection.stats() includes the avgObjSize field, which represents the average document size in the collection.
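That threshold can be computed directly. In the sketch below, maxDocsPerRange is an illustrative helper, the 128-megabyte range size is an assumed example, and the average object size would come from the avgObjSize field of db.collection.stats():

```javascript
// Maximum documents a range may contain and still be eligible to move:
// 2 * (configured range size / average document size).
function maxDocsPerRange(rangeSizeBytes, avgObjSizeBytes) {
  return Math.floor(2 * (rangeSizeBytes / avgObjSizeBytes));
}

// Example: a 128 MB range size with 512-byte average documents.
const limit = maxDocsPerRange(128 * 1024 * 1024, 512);
console.log(limit); // 524288 documents
```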

For chunks that are too large to migrate:

  • The balancer setting attemptToBalanceJumboChunks allows the balancer to migrate chunks too large to move as long as the chunks are not labeled jumbo. See Balance Ranges that Exceed Size Limit for details.
  • When issuing moveChunk and moveRange commands, it's possible to specify the forceJumbo option to allow for the migration of ranges that are too large to move. The ranges may or may not be labeled jumbo.

Shard Key Size

Starting in version 4.4, MongoDB removes the limit on the shard key size.

For MongoDB 4.2 and earlier, a shard key cannot exceed 512 bytes.

Shard Key Index Type

A shard key index can be an ascending index on the shard key, a compound index that starts with the shard key and specifies ascending order for the shard key, or a hashed index.

A shard key index cannot be:

  • A descending index on the shard key
  • A partial index
  • Any of the following index types: multikey, text, or geospatial

Shard Key Selection is Immutable in MongoDB 4.2 and Earlier

Your options for changing a shard key depend on the version of MongoDB that you are running:

  • Starting in MongoDB 5.0, you can reshard a collection by changing a collection's shard key.
  • Starting in MongoDB 4.4, you can refine a shard key by adding a suffix field or fields to the existing shard key.
  • In MongoDB 4.2 and earlier, the choice of shard key cannot be changed after sharding.

In MongoDB 4.2 and earlier, to change a shard key:

  • Dump all data from MongoDB into an external format.
  • Drop the original sharded collection.
  • Configure sharding using the new shard key.
  • Pre-split the shard key range to ensure initial even distribution.
  • Restore the dumped data into MongoDB.

Monotonically Increasing Shard Keys Can Limit Insert Throughput

For clusters with high insert volumes, a shard key with monotonically increasing and decreasing keys can affect insert throughput. If your shard key is the _id field, be aware that the default values of the _id fields are ObjectIds, which have generally increasing values.

When inserting documents with monotonically increasing shard keys, all inserts belong to the same chunk on a single shard. The system eventually divides the chunk range that receives all write operations and migrates its contents to distribute data more evenly. However, at any moment the cluster directs insert operations only to a single shard, which creates an insert throughput bottleneck.

If the operations on the cluster are predominately read operations and updates, this limitation may not affect the cluster.

To avoid this constraint, use a hashed shard key or select a field that does not increase or decrease monotonically.

Hashed shard keys and hashed indexes store hashes of keys with ascending values.

Sort Operations

If MongoDB cannot use an index or indexes to obtain the sort order, MongoDB must perform a blocking sort operation on the data. The name refers to the requirement that the SORT stage reads all input documents before returning any output documents, blocking the flow of data for that specific query.

If MongoDB requires using more than 100 megabytes of system memory for the blocking sort operation, MongoDB returns an error unless the query specifies cursor.allowDiskUse() (New in MongoDB 4.4). cursor.allowDiskUse() allows MongoDB to use temporary files on disk to store data exceeding the 100 megabyte system memory limit while processing a blocking sort operation.

Changed in version 4.4: For MongoDB 4.2 and prior, blocking sort operations could not exceed 32 megabytes of system memory.

For more information on sorts and index use, see Sort and Index Use.

Aggregation Pipeline Operation

Starting in MongoDB 6.0, the allowDiskUseByDefault parameter controls whether pipeline stages that require more than 100 megabytes of memory to execute write temporary files to disk by default.

  • If allowDiskUseByDefault is set to true, pipeline stages that require more than 100 megabytes of memory to execute write temporary files to disk by default. You can disable writing temporary files to disk for specific find or aggregate commands using the { allowDiskUse: false } option.
  • If allowDiskUseByDefault is set to false, pipeline stages that require more than 100 megabytes of memory to execute raise an error by default. You can enable writing temporary files to disk for specific find or aggregate commands using the { allowDiskUse: true } option.

The $search aggregation stage is not restricted to 100 megabytes of RAM because it runs in a separate process.

Examples of stages that can write temporary files to disk when allowDiskUseByDefault is true are:

  • $sort, when the sort operation is not supported by an index

Note

Pipeline stages operate on streams of documents, with each pipeline stage taking in documents, processing them, and then outputting the resulting documents.

Some stages can't output any documents until they have processed all incoming documents. These pipeline stages must keep their stage output in RAM until all incoming documents are processed. As a result, these pipeline stages may require more space than the 100 MB limit.

If the results of one of your pipeline stages exceed the limit, consider adding a $limit stage.

Starting in MongoDB 4.2, the profiler log messages and diagnostic log messages include a usedDisk indicator if any aggregation stage wrote data to temporary files due to memory restrictions.

Aggregation and Read Concern

  • Starting in MongoDB 4.2, the $out stage cannot be used in conjunction with read concern "linearizable". That is, if you specify "linearizable" read concern for db.collection.aggregate(), you cannot include the $out stage in the pipeline.
  • The $merge stage cannot be used in conjunction with read concern "linearizable". That is, if you specify "linearizable" read concern for db.collection.aggregate(), you cannot include the $merge stage in the pipeline.

2d Geospatial Queries Cannot Use the $or Operator


Geospatial Queries

Using a 2d index for queries on spherical data can return incorrect results or an error. For example, 2d indexes don't support spherical queries that wrap around the poles.

Geospatial Coordinates

  • Valid longitude values are between -180 and 180, both inclusive.
  • Valid latitude values are between -90 and 90, both inclusive.

Area of GeoJSON Polygons

For $geoWithin or $geoIntersects, if you specify a single-ringed polygon that has an area greater than a single hemisphere, include the custom MongoDB coordinate reference system in the $geometry expression. Otherwise, $geoWithin or $geoIntersects queries for the complementary geometry. For all other GeoJSON polygons with areas greater than a hemisphere, $geoWithin or $geoIntersects queries for the complementary geometry.
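The longitude and latitude bounds above can be validated before building GeoJSON geometry; a minimal sketch (validLngLat is an illustrative helper, not a driver API):

```javascript
// GeoJSON positions are [longitude, latitude]: longitude in [-180, 180],
// latitude in [-90, 90], both inclusive.
function validLngLat([lng, lat]) {
  return lng >= -180 && lng <= 180 && lat >= -90 && lat <= 90;
}

console.log(validLngLat([-73.97, 40.77])); // true  (New York)
console.log(validLngLat([200, 40.77]));    // false (longitude out of range)
console.log(validLngLat([-73.97, 95]));    // false (latitude out of range)
```

Note that accidentally swapping longitude and latitude often still passes a range check, so ordering mistakes need separate care.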

Multi-document Transactions

For transactions:

  • You can create collections and indexes in transactions. For details, see Create Collections and Indexes in a Transaction.
  • The collections used in a transaction can be in different databases.

    Note

    You cannot create new collections in cross-shard write transactions. For example, if you write to an existing collection in one shard and implicitly create a collection in a different shard, MongoDB cannot perform both operations in the same transaction.
  • You cannot write to capped collections. (Starting in MongoDB 4.2)
  • You cannot use read concern "snapshot" when reading from a capped collection. (Starting in MongoDB 5.0)
  • You cannot read/write to collections in the config, admin, or local databases.
  • You cannot write to system.* collections.
  • You cannot return the supported operation's query plan using explain or similar commands.
  • For cursors created outside of a transaction, you cannot call getMore inside the transaction.
  • For cursors created in a transaction, you cannot call getMore outside the transaction.
  • You cannot specify killCursors as the first operation in a transaction.

Changed in version 4.4.

The following operations are not allowed in transactions:

  • Creating new collections in cross-shard write transactions. For example, if you write to an existing collection in one shard and implicitly create a collection in a different shard, MongoDB cannot perform both operations in the same transaction.
  • Explicit creation of collections, e.g. the db.createCollection() method, and indexes, e.g. the db.collection.createIndexes() and db.collection.createIndex() methods, when using a read concern level other than "local".
  • The listCollections and listIndexes commands and their helper methods.
  • Other non-CRUD and non-informational operations, such as createUser, getParameter, count, etc. and their helpers.

Transactions have a lifetime limit as specified by transactionLifetimeLimitSeconds. The default is 60 seconds.
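As a sketch, the lifetime limit can be adjusted at runtime with the setParameter command; the value of 30 seconds below is purely illustrative:

```javascript
// Sketch: the setParameter command document for changing the transaction
// lifetime limit. Run in mongosh against the admin database:
//   db.adminCommand(cmd)
const cmd = { setParameter: 1, transactionLifetimeLimitSeconds: 30 };
```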

Write Command Batch Limit Size


100,000 writes are allowed in a single batch operation, defined by a single request to the server.

Changed in version 3.6: The limit raises from 1,000 to 100,000 writes. This limit also applies to legacy OP_INSERT messages.

The Bulk() operations in the mongo shell and comparable methods in the drivers do not have this limit.
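The batch limit above can be handled client-side by splitting a large list of write operations into chunks before sending them. This is a minimal sketch; the operation shape and collection are hypothetical:

```javascript
// Sketch: split a large list of write operations so that each batch stays
// within the 100,000-write limit of a single batch request.
function chunkWrites(ops, maxBatchSize = 100000) {
  const batches = [];
  for (let i = 0; i < ops.length; i += maxBatchSize) {
    batches.push(ops.slice(i, i + maxBatchSize));
  }
  return batches;
}

// Hypothetical example: 250,000 insert operations become 3 batches
// (100,000 + 100,000 + 50,000), each sent as its own bulkWrite call.
const ops = Array.from({ length: 250000 }, (_, i) => ({
  insertOne: { document: { _id: i } }
}));
const batches = chunkWrites(ops);
```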

Views

A view definition pipeline cannot include the $out or the $merge stage. This restriction also applies to embedded pipelines, such as pipelines used in $lookup or $facet stages.

Views have the following operation restrictions:

  • Views are read-only.
  • You cannot rename views.
  • find() operations on views do not support the following projection operators: $, $elemMatch, $slice, and $meta.
  • Views do not support text search.
  • Views do not support map-reduce operations.

Projection Restrictions

New in version 4.4:

$-Prefixed Field Path Restriction

Starting in MongoDB 4.4, the find() and findAndModify() projection cannot project a field that starts with $ with the exception of the DBRef fields. For example, starting in MongoDB 4.4, the following operation is invalid:


db.inventory.find( {}, { "$instock.warehouse": 0, "$item": 0, "detail.$price": 1 } ) // Invalid starting in 4.4

In earlier versions, MongoDB ignores the $-prefixed field projections.


$ Positional Operator Placement Restriction

Starting in MongoDB 4.4, the $ projection operator can only appear at the end of the field path, for example "field.$" or "fieldA.fieldB.$". For example, starting in MongoDB 4.4, the following operation is invalid:


db.inventory.find( { }, { "instock.$.qty": 1 } ) // Invalid starting in 4.4

To resolve, remove the component of the field path that follows the $ projection operator. In previous versions, MongoDB ignores the part of the path that follows the $; i.e. the projection is treated as "instock.$".

Empty Field Name Projection Restriction

Starting in MongoDB 4.4, find() and findAndModify() projection cannot include a projection of an empty field name. For example, starting in MongoDB 4.4, the following operation is invalid:


db.inventory.find( { }, { "": 0 } ) // Invalid starting in 4.4

In previous versions, MongoDB treats the inclusion/exclusion of the empty field as it would the projection of non-existing fields.

Path Collision: Embedded Documents and Its Fields

Starting in MongoDB 4.4, it is illegal to project an embedded document with any of the embedded document's fields. For example, consider a collection inventory with documents that contain a size field:


{ ..., size: { h: 10, w: 15.25, uom: "cm" }, ... }

Starting in MongoDB 4.4, the following operation fails with a Path collision error because it attempts to project both the size document and the size.uom field:


db.inventory.find( {}, { size: 1, "size.uom": 1 } )  // Invalid starting in 4.4

In previous versions, the lattermost projection between the embedded document and its fields determines the projection:

  • If the projection of the embedded document comes after any and all projections of its fields, MongoDB projects the embedded document. For example, the projection document { "size.uom": 1, size: 1 } produces the same result as the projection document { size: 1 }.
  • If the projection of the embedded document comes before the projection of any of its fields, MongoDB projects the specified field or fields. For example, the projection document { size: 1, "size.uom": 1 } produces the same result as the projection document { "size.uom": 1 }.

Path Collision: $slice of an Array and Embedded Fields

Starting in MongoDB 4.4, find() and findAndModify() projection cannot contain both a $slice of an array and a field embedded in the array. For example, consider a collection inventory that contains an array field instock:


{ ..., instock: [ { warehouse: "A", qty: 35 }, { warehouse: "B", qty: 15 }, { warehouse: "C", qty: 35 } ], ... }

Starting in MongoDB 4.4, the following operation fails with a Path collision error:


db.inventory.find( {}, { "instock": { $slice: 1 }, "instock.warehouse": 0 } ) // Invalid starting in 4.4

In previous versions, the projection applies both projections and returns the first element ($slice: 1) in the instock array but suppresses the warehouse field in the projected element. Starting in MongoDB 4.4, to achieve the same result, use the db.collection.aggregate() method with two separate $project stages.
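As a sketch of that suggestion, the following pipeline is one way to combine the two projections as separate $project stages; the concrete stage contents are my construction over the inventory example above, not taken from the original text:

```javascript
// Sketch: reproduce the pre-4.4 combination of $slice and an embedded-field
// exclusion by splitting it into two $project stages.
const pipeline = [
  // First keep only the first element of the instock array ...
  { $project: { instock: { $slice: ["$instock", 1] } } },
  // ... then suppress the warehouse field in the projected element.
  { $project: { "instock.warehouse": 0 } }
];

// Usage in mongosh: db.inventory.aggregate(pipeline)
```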


$ Positional Operator and $slice Restriction

Starting in MongoDB 4.4, find() and findAndModify() projection cannot include a $slice projection expression as part of a $ projection expression. For example, starting in MongoDB 4.4, the following operation is invalid:


db.inventory.find( { "instock.qty": { $gt: 25 } }, { "instock.$": { $slice: 1 } } ) // Invalid starting in 4.4

In previous versions, MongoDB returns the first element (instock.$) in the instock array that matches the query condition; i.e. the positional projection "instock.$" takes precedence and the $slice: 1 is a no-op. The "instock.$": { $slice: 1 } does not exclude any other document field.

Sessions and $external Username Limit

To use sessions with $external authentication users (Kerberos, LDAP, or x.509 users), usernames cannot be greater than 10k bytes.

Session Idle Timeout

Sessions that receive no read or write operations for 30 minutes or that are not refreshed using refreshSessions within this threshold are marked as expired and can be closed by the MongoDB server at any time. Closing a session kills any in-progress operations and open cursors associated with the session. This includes cursors configured with noCursorTimeout() or a maxTimeMS() greater than 30 minutes.

Consider an application that issues a db.collection.find(). The server returns a cursor along with a batch of documents defined by the cursor.batchSize() of the find(). The session refreshes each time the application requests a new batch of documents from the server. However, if the application takes longer than 30 minutes to process the current batch of documents, the session is marked as expired and closed. When the application requests the next batch of documents, the server returns an error as the cursor was killed when the session was closed.

For operations that return a cursor, if the cursor may be idle for longer than 30 minutes, issue the operation within an explicit session using Mongo.startSession() and periodically refresh the session using the refreshSessions command. For example:


var session = db.getMongo().startSession()
var sessionId = session.getSessionId().id
sessionId  // show the sessionId

var cursor = session.getDatabase("examples").getCollection("data").find().noCursorTimeout()
var refreshTimestamp = new Date() // take note of time at operation start

while (cursor.hasNext()) {
  // Check if more than 5 minutes have passed since the last refresh
  if ( (new Date() - refreshTimestamp) / 1000 > 300 ) {
    print("refreshing session")
    db.adminCommand({"refreshSessions" : [sessionId]})
    refreshTimestamp = new Date()
  }
  // process cursor normally
}

In the example operation, the find() method is associated with an explicit session. The cursor is configured with noCursorTimeout() to prevent the server from closing the cursor if idle. The while loop includes a block that uses refreshSessions to refresh the session every 5 minutes. Since the session will never exceed the 30 minute idle timeout, the cursor can remain open indefinitely.