Commit 7ab9ace

DOCSP-31213 - streaming config (#183)
1 parent eb8f1f0 commit 7ab9ace


48 files changed (+2004 / -1524 lines)

config/redirects

Lines changed: 6 additions & 1 deletion
@@ -61,4 +61,9 @@ raw: ${prefix}/sparkR -> ${base}/v3.0/r-api/
 [*-v3.0]: ${prefix}/${version}/configuration/read -> ${base}/${version}/
 [*-v3.0]: ${prefix}/${version}/write-to-mongodb -> ${base}/${version}/
 [*-v3.0]: ${prefix}/${version}/read-from-mongodb -> ${base}/${version}/
-[*-v3.0]: ${prefix}/${version}/structured-streaming -> ${base}/${version}/
+[*-v3.0]: ${prefix}/${version}/structured-streaming -> ${base}/${version}/
+[v10.0-*]: ${prefix}/${version}/configuration/write -> ${base}/${version}/batch-mode/batch-write-config/
+[v10.0-*]: ${prefix}/${version}/configuration/read -> ${base}/${version}/batch-mode/batch-read-config/
+[v10.0-*]: ${prefix}/${version}/write-to-mongodb -> ${base}/${version}/batch-mode/batch-write/
+[v10.0-*]: ${prefix}/${version}/read-from-mongodb -> ${base}/${version}/batch-mode/batch-read/
+[v10.0-*]: ${prefix}/${version}/structured-streaming -> ${base}/${version}/streaming-mode/

snooty.toml

Lines changed: 9 additions & 1 deletion
@@ -3,7 +3,15 @@ title = "Spark Connector"
 
 intersphinx = ["https://www.mongodb.com/docs/manual/objects.inv"]
 
-toc_landing_pages = ["configuration"]
+toc_landing_pages = [
+    "configuration",
+    "/batch-mode",
+    "/streaming-mode",
+    "/streaming-mode/streaming-read",
+    "/streaming-mode/streaming-write",
+    "/batch-mode/batch-write",
+    "/batch-mode/batch-read",
+]
 
 [constants]
 connector-short = "Spark Connector"

source/batch-mode.txt

Lines changed: 32 additions & 0 deletions
@@ -0,0 +1,32 @@
+==========
+Batch Mode
+==========
+
+.. contents:: On this page
+   :local:
+   :backlinks: none
+   :depth: 1
+   :class: singlecol
+
+.. toctree::
+
+   /batch-mode/batch-read
+   /batch-mode/batch-write
+
+Overview
+--------
+
+In batch mode, you can use the Spark Dataset and DataFrame APIs to process data at
+a specified time interval.
+
+The following sections show you how to use the {+connector-short+} to read data from
+MongoDB and write data to MongoDB in batch mode:
+
+- :ref:`batch-read-from-mongodb`
+- :ref:`batch-write-to-mongodb`
+
+.. tip:: Apache Spark Documentation
+
+   To learn more about using Spark to process batches of data, see the
+   `Spark Programming Guide
+   <https://spark.apache.org/docs/latest/sql-programming-guide.html>`__.
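The batch read described in the `source/batch-mode.txt` page above can be sketched in PySpark. This is a minimal illustration, not code from the commit: the `connection.uri`, `database`, and `collection` option keys follow the v10.x connector's batch read configuration, while the URI and namespace values are placeholders. An actual read requires a SparkSession with the connector package on the classpath, so the snippet only assembles the option map.

```python
# Sketch: assembling batch-read options for the MongoDB Spark Connector (v10.x).
# The URI, database, and collection below are placeholder values.

def batch_read_options(uri, database, collection):
    """Build the option map passed to spark.read.format("mongodb")."""
    return {
        "connection.uri": uri,
        "database": database,
        "collection": collection,
    }

opts = batch_read_options("mongodb://localhost:27017", "test", "movies")

# With a live SparkSession and the connector available, the batch read would be:
#   df = spark.read.format("mongodb").options(**opts).load()
print(sorted(opts))
```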
0 commit comments
