
Commit 17a463e

Update markdown for docusaurus v3.

Signed-off-by: Gerd Zellweger <mail@gerdzellweger.com>

Parent: a2fb539

38 files changed: 121 additions & 143 deletions

docs/concepts.md

Lines changed: 1 addition & 1 deletion

@@ -65,7 +65,7 @@ user would find most convenient.
 Feldera is the pioneering implementation of a new theory that unifies
 databases, streaming computation, and incremental view maintenance,
 written by the inventors of that theory. See our
-[publications](/docs/papers) for all the details.
+[publications](/papers) for all the details.
 
 Feldera code is available on [Github][Feldera] using an MIT
 open-source license. It consists of a Rust runtime and a SQL compiler.

docs/connectors/index.mdx

Lines changed: 16 additions & 16 deletions

@@ -60,26 +60,26 @@ appear before the `AS` clause to resolve any parsing ambiguities.
 A connector specification consists of three parts:
 
 * [Generic attributes](#generic-attributes) common to all connectors, such as backpressure thresholds.
-* Transport specification (`transport`) for either [input](/docs/connectors/sources/) or [output](/docs/connectors/sinks/)
+* Transport specification (`transport`) for either [input](/connectors/sources/) or [output](/connectors/sinks/)
 defines the data transport to be used by the connector.
-Example transports include [Kafka](/docs/connectors/sources/kafka),
-[URL](/docs/connectors/sources/http-get), [Delta Lake](/docs/connectors/sources/delta), etc.
-* [Data format specification](/docs/formats) (`format`), which defines the data format for the connector.
-Example data formats include [CSV](/docs/formats/csv), [JSON](/docs/formats/json), [Parquet](/docs/formats/parquet), or
+Example transports include [Kafka](/connectors/sources/kafka),
+[URL](/connectors/sources/http-get), [Delta Lake](/connectors/sources/delta), etc.
+* [Data format specification](/formats) (`format`), which defines the data format for the connector.
+Example data formats include [CSV](/formats/csv), [JSON](/formats/json), [Parquet](/formats/parquet), or
 Avro).
 
 :::note
 
-Some transports, e.g., [Delta Lake](/docs/connectors/sinks/delta) and
-[datagen](/docs/connectors/sources/datagen), use fixed predefined data formats and do not require the
+Some transports, e.g., [Delta Lake](/connectors/sinks/delta) and
+[datagen](/connectors/sources/datagen), use fixed predefined data formats and do not require the
 format section in the connector specification.
 
 :::
 
 This architecture allows the user to combine different transports and data formats.
 
-These basics apply to all connectors **except** the HTTP [input](/docs/connectors/sources/http) and
-[output](/docs/connectors/sinks/http) connectors which are not managed by the user, as they directly feed/fetch data
+These basics apply to all connectors **except** the HTTP [input](/connectors/sources/http) and
+[output](/connectors/sinks/http) connectors which are not managed by the user, as they directly feed/fetch data
 into/from a pipeline via dedicated pipeline endpoints and therefore do not need to be configured in the `WITH` clauses
 of tables and views.
 
@@ -98,8 +98,8 @@ The following attributes are common to all connectors:
 By default a Feldera pipeline sends a batch of changes to the output transport
 for each batch of input updates it processes. This can result in a stream of
 small updates, which is normal and even preferable for output transports like
-[Kafka](/docs/connectors/sinks/kafka); however it can cause performance problems
-for other connectors, such as the [Delta Lake connector](/docs/connectors/sinks/delta)
+[Kafka](/connectors/sinks/kafka); however it can cause performance problems
+for other connectors, such as the [Delta Lake connector](/connectors/sinks/delta)
 by creating a large number of small files.
 
 The output buffer mechanism is designed to solve this problem by decoupling the
@@ -147,14 +147,14 @@ specified.
 
 :::
 
-See [Delta Lake output connector documentation](/docs/connectors/sinks/delta#curl)
+See [Delta Lake output connector documentation](/connectors/sinks/delta)
 for an example of configuring the output buffer.
 
 ## Additional resources
 
 For more information, see:
 
-* [Tutorial on using input and output connectors](/docs/tutorials/basics/part3)
-* [Tutorial on using HTTP-based Input and Output](/docs/tutorials/basics/part2)
-* [Supported source transports](/docs/connectors/sources)
-* [Supported sinks transports](/docs/connectors/sinks)
+* [Tutorial on using input and output connectors](/tutorials/basics/part3)
+* [Tutorial on using HTTP-based Input and Output](/tutorials/basics/part2)
+* [Supported source transports](/connectors/sources)
+* [Supported sinks transports](/connectors/sinks)
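The three-part specification described above (generic attributes, a `transport` section, and a `format` section) can be sketched as a plain data structure. This is an illustrative sketch only: the `transport`/`format` layout follows the Feldera connector docs, but the attribute names and concrete values (broker address, topic, buffer threshold) are assumptions, not a verbatim configuration.

```python
# Hedged sketch of a connector specification: generic attributes plus a
# transport section and a format section. Values are illustrative.
import json

connector = {
    # Generic attribute common to all connectors (name is illustrative).
    "max_queued_records": 100_000,
    # Transport: which source/sink technology moves the data.
    "transport": {
        "name": "kafka_input",
        "config": {
            "bootstrap.servers": "localhost:9092",
            "topics": ["tools"],
        },
    },
    # Format: how records on that transport are encoded.
    "format": {
        "name": "json",
        "config": {"update_format": "insert_delete"},
    },
}

spec = json.dumps(connector, indent=2)
print(spec)
```

Because transport and format are separate sections, any transport can in principle be paired with any format, which is the combinability the text above describes.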

docs/connectors/sinks/confluent-jdbc.md

Lines changed: 3 additions & 3 deletions

@@ -2,7 +2,7 @@
 
 :::note
 This page describes configuration options specific to integration with the Confluent JDBC sink connector.
-See [top-level connector documentation](/docs/connectors/) for general information
+See [top-level connector documentation](/connectors/) for general information
 about configuring input and output connectors.
 :::
 
@@ -157,5 +157,5 @@ The user is responsible for selecting a set of columns for the `key_fields` prop
 guaranteed to have unique values. Failure to choose a unique key may lead to data loss.
 :::
 
-* For more details on Avro support in Feldera, please refer to the [Avro Format Documentation](/docs/formats/avro).
-* For more information on configuring Kafka transport, visit the [Kafka Sink Connector Documentation](/docs/connectors/sinks/kafka).
+* For more details on Avro support in Feldera, please refer to the [Avro Format Documentation](/formats/avro).
+* For more information on configuring Kafka transport, visit the [Kafka Sink Connector Documentation](/connectors/sinks/kafka).

docs/connectors/sinks/delta.md

Lines changed: 1 addition & 1 deletion

@@ -46,7 +46,7 @@ of input changes. It works by accumulating updates inside the pipeline
 for up to a user-defined period of time or until accumulating a user-defined number
 of updates and writing them to the Delta Table as a small number of large files.
 
-See [output buffer](/docs/connectors#configuring-the-output-buffer) for details on configuring the output buffer mechanism.
+See [output buffer](/connectors#configuring-the-output-buffer) for details on configuring the output buffer mechanism.
 
 ## Limitations
 
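The output-buffer behavior described in this hunk, accumulating updates for up to a user-defined period or until a user-defined count, then writing them as one large batch, can be sketched in a few lines. This is a toy model of the mechanism, not Feldera's implementation; the threshold names are illustrative.

```python
import time

class OutputBuffer:
    """Toy sketch of an output buffer: hold updates until a time or size
    threshold is reached, then flush them as a single large batch."""

    def __init__(self, max_delay_secs, max_records, flush):
        self.max_delay_secs = max_delay_secs
        self.max_records = max_records
        self.flush = flush          # callback that writes one large batch
        self.buffer = []
        self.first_ts = None        # time the oldest buffered update arrived

    def push(self, record):
        if self.first_ts is None:
            self.first_ts = time.monotonic()
        self.buffer.append(record)
        # Flush on whichever threshold is hit first: count or elapsed time.
        if (len(self.buffer) >= self.max_records
                or time.monotonic() - self.first_ts >= self.max_delay_secs):
            self.flush(self.buffer)
            self.buffer, self.first_ts = [], None

batches = []
buf = OutputBuffer(max_delay_secs=60.0, max_records=3, flush=batches.append)
for r in range(7):
    buf.push(r)
print(batches)  # two full batches of three; the last record stays buffered
```

Batching this way trades output latency for fewer, larger writes, which is exactly why it helps sinks like Delta Lake that suffer from many small files.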

docs/connectors/sinks/http.md

Lines changed: 2 additions & 2 deletions

@@ -84,7 +84,7 @@ print(json.dumps(json.loads(response)['json_data'], indent=4))
 
 For more information, see:
 
-* [Tutorial section](/docs/tutorials/basics/part2) on HTTP-based input and output.
+* [Tutorial section](/tutorials/basics/part2) on HTTP-based input and output.
 
-* [REST API documentation](https://www.feldera.com/api/subscribe-to-a-stream-of-updates-from-a-sql-view-or-table)
+* [REST API documentation](/api/subscribe-to-a-stream-of-updates-from-a-sql-view-or-table)
 for the `/egress` endpoint.

docs/connectors/sinks/kafka.md

Lines changed: 4 additions & 4 deletions

@@ -40,15 +40,15 @@ WITH (
 
 For more information, see:
 
-* [Kafka input connector](/docs/connectors/sources/kafka#how-to-write-connector-config)
+* [Kafka input connector](/connectors/sources/kafka#how-to-write-connector-config)
 for examples on how to write the configuration to connect to Kafka brokers (e.g., how to
 specify authentication and encryption).
 
-* [Tutorial section](/docs/tutorials/basics/part3#step-2-create-kafkaredpanda-connectors) which involves
+* [Tutorial section](/tutorials/basics/part3#step-2-configure-kafkaredpanda-connectors) which involves
 creating a Kafka output connector.
 
-* Data formats such as [JSON](https://www.feldera.com/docs/formats/json) and
-[CSV](https://www.feldera.com/docs/formats/csv)
+* Data formats such as [JSON](/formats/json) and
+[CSV](/formats/csv)
 
 * Overview of Kafka configuration options:
 [librdkafka options](https://github.com/confluentinc/librdkafka/blob/master/CONFIGURATION.md)

docs/connectors/sources/datagen.md

Lines changed: 1 addition & 1 deletion

@@ -4,7 +4,7 @@ Datagen is a source connector that generates synthetic data for testing,
 prototyping and benchmarking purposes.
 
 For a tutorial on how to use the Datagen connector, see the
-[Random Data Generation](/docs/tutorials/basics/part4) tutorial.
+[Random Data Generation](/tutorials/basics/part4) tutorial.
 
 ## Datagen input connector configuration
 
docs/connectors/sources/debezium.md

Lines changed: 6 additions & 6 deletions

@@ -2,13 +2,13 @@
 
 :::note
 This page describes configuration options specific to the Debezium source connector.
-See [top-level connector documentation](/docs/connectors/) for general information
+See [top-level connector documentation](/connectors/) for general information
 about configuring input and output connectors.
 :::
 
 Debezium is a widely-used **Change Data Capture** (CDC) technology that streams real-time changes from databases such
 as PostgreSQL, MySQL, and Oracle to Kafka topics. Feldera can consume these change streams as inputs. We support
-Debezium streams encoded in both [JSON](/docs/formats/json) and [Avro](/docs/formats/avro) formats. Synchronizing
+Debezium streams encoded in both [JSON](/formats/json) and [Avro](/formats/avro) formats. Synchronizing
 a set of database tables with Feldera using Debezium involves three steps:
 
 1. [**Configure your database to work with Debezium**](#step-1-configure-your-database-to-work-with-debezium)
@@ -236,14 +236,14 @@ columns with the same types, with the following exceptions:
 ### JSON columns
 
 Source database columns of type `JSON` and `JSONB` can be mapped to Feldera columns of
-either [`VARIANT`](/docs/sql/json) or `VARCHAR` type. The former allows efficient manipulation
+either [`VARIANT`](/sql/json) or `VARCHAR` type. The former allows efficient manipulation
 of JSON values, similar to the `JSONB` type. The latter is preferable when working with JSON
 values as regular strings, when you don't need to parse or manipulate the JSON contents of the
 string.
 
 ## Additional resources
 
-* For more details on JSON support in Feldera, please refer to the [JSON Format Documentation](/docs/formats/json).
-* For more details on Avro support in Feldera, please refer to the [Avro Format Documentation](/docs/formats/avro).
-* For more information on configuring Kafka transport, visit the [Kafka Source Connector Documentation](/docs/connectors/sources/kafka).
+* For more details on JSON support in Feldera, please refer to the [JSON Format Documentation](/formats/json).
+* For more details on Avro support in Feldera, please refer to the [Avro Format Documentation](/formats/avro).
+* For more information on configuring Kafka transport, visit the [Kafka Source Connector Documentation](/connectors/sources/kafka).
 
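As context for the change streams discussed above: a Debezium JSON change event carries a `payload` with `before`/`after` row images and an `op` code (`c` = create, `u` = update, `d` = delete). The sketch below decodes a simplified event; real events also include schema and source metadata, which is omitted here, and the table columns are invented for the example.

```python
# Decode a simplified Debezium change event and pick the surviving row image.
import json

event = json.loads("""
{"payload": {"before": null,
             "after": {"id": 1, "name": "alice"},
             "op": "c"}}
""")

payload = event["payload"]
# Inserts and updates carry the new row in "after";
# deletes carry the removed row in "before".
if payload["op"] in ("c", "u"):
    row = payload["after"]
elif payload["op"] == "d":
    row = payload["before"]
print(payload["op"], row)
```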

docs/connectors/sources/delta.md

Lines changed: 1 addition & 1 deletion

@@ -28,7 +28,7 @@ batch and stream processing, offering a bridge between the two worlds.
 specified, and `mode` is one of `snapshot` or `snapshot_and_follow`,
 the snapshot of the table is ingested in the timestamp order. This setting is required
 for tables declared with the
-[`LATENESS`](https://www.feldera.com/docs/sql/streaming#lateness-expressions) attribute
+[`LATENESS`](/sql/streaming#lateness-expressions) attribute
 in Feldera SQL. It impacts the performance of the connector, since data must be sorted
 before pushing it to the pipeline; therefore it is not recommended to use this
 settings for tables without `LATENESS`.
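The hunk above notes that, with a timestamp column specified, the table snapshot is ingested in timestamp order, which `LATENESS` declarations rely on. A minimal sketch of that pre-sort follows; the column name `ts` and the row contents are invented for illustration.

```python
# Sort a table snapshot by its timestamp column before ingestion, so rows
# arrive in event-time order. Column name "ts" is illustrative.
snapshot = [
    {"id": 2, "ts": "2024-01-02"},
    {"id": 1, "ts": "2024-01-01"},
    {"id": 3, "ts": "2024-01-03"},
]
ordered = sorted(snapshot, key=lambda row: row["ts"])
print([row["id"] for row in ordered])  # [1, 2, 3]
```

This is also why the setting costs performance: the whole snapshot must be sorted before any of it can be pushed to the pipeline.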

docs/connectors/sources/http-get.md

Lines changed: 4 additions & 6 deletions

@@ -9,7 +9,7 @@ Feldera can ingest data from a user-provided URL into a SQL table.
 We will create a pipeline with an HTTP GET connector.
 
 The file is hosted at `https://example.com/tools-data.json`,
-and is in [newline-delimited JSON (NDJSON) format](/docs/formats/json#encoding-multiple-changes)
+and is in [newline-delimited JSON (NDJSON) format](/formats/json#encoding-multiple-changes)
 with one row per line. For example:
 
 ```text
@@ -86,10 +86,8 @@ requests.put(
 
 For more information, see:
 
-* [API connectors documentation](https://www.feldera.com/api/create-a-new-connector)
-
-* [Tutorial section](/docs/tutorials/basics/part3#step-1-create-http-get-connectors) which involves
+* [Tutorial section](/tutorials/basics/part3#step-1-configure-https-get-connectors) which involves
 creating an HTTP GET connector.
 
-* Data formats such as [JSON](https://www.feldera.com/docs/formats/json),
-[CSV](https://www.feldera.com/docs/formats/csv), and [Parquet](https://www.feldera.com/docs/formats/parquet)
+* Data formats such as [JSON](/formats/json),
+[CSV](/formats/csv), and [Parquet](/formats/parquet)
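The newline-delimited JSON (NDJSON) encoding mentioned above is simple to illustrate: one independent JSON object per line. The sketch below uses a Feldera-style `insert` envelope per row, but the field names and values are invented for the example.

```python
# Parse an NDJSON payload: each non-empty line is one JSON document.
import json

ndjson = (
    '{"insert": {"name": "hammer", "price": 5}}\n'
    '{"insert": {"name": "wrench", "price": 9}}\n'
)

rows = [json.loads(line) for line in ndjson.splitlines() if line.strip()]
print(len(rows), rows[0]["insert"]["name"])
```

Because each line stands alone, an ingest pipeline can process the file incrementally without first reading it to the end, unlike a single top-level JSON array.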
