
Data Connectors

Incorporate Context from Any Source

Configure bidirectional connectors to in-motion and at-rest data sources.

Nstream portal UI

Why data connectors matter

Data is meaningless without context

Nstream data connectors make it easy to enrich stateful entities with related context from all relevant data sources. Giving each entity a comprehensive real-time picture of its real-world environment enables context-sensitive insights and situationally-aware automated action.

Messaging connectors

Subscribe to message broker topics and route incoming messages to relevant stateful entities for real-time processing.
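
For example, the ingress pattern can be sketched with the plain Apache Kafka consumer client rather than Nstream's own connector API; the topic name, the VehicleEntity class, and its ingest method are illustrative assumptions only.

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class MessagingIngress {
  // One stateful entity per real-world object, keyed by its unique ID.
  static final ConcurrentHashMap<String, VehicleEntity> entities = new ConcurrentHashMap<>();

  public static void main(String[] args) {
    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");
    props.put("group.id", "ingress-demo");
    props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

    try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
      consumer.subscribe(Collections.singletonList("vehicle-telemetry")); // hypothetical topic
      while (true) {
        for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofMillis(100))) {
          if (record.key() == null) continue; // assumes records are keyed by vehicle ID
          // Route each message to the stateful entity for that vehicle.
          entities.computeIfAbsent(record.key(), VehicleEntity::new).ingest(record.value());
        }
      }
    }
  }
}

class VehicleEntity {
  final String id;
  volatile String latestTelemetry;

  VehicleEntity(String id) { this.id = id; }

  void ingest(String payload) { this.latestTelemetry = payload; } // real logic would update derived state
}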

Database connectors

Periodically query databases for related context, and cache the results in stateful entities for fast in-memory access.
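
A rough sketch of this polling pattern with plain JDBC and a scheduled executor, not the Nstream connector itself; the connection URL, table, and column names are hypothetical.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class DatabaseContextPoller {
  // In-memory cache of slowly changing relational context, keyed by entity ID.
  static final Map<String, String> customerNames = new ConcurrentHashMap<>();

  public static void main(String[] args) {
    ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    // Refresh the context every five minutes; reads stay in memory.
    scheduler.scheduleAtFixedRate(DatabaseContextPoller::refresh, 0, 5, TimeUnit.MINUTES);
  }

  static void refresh() {
    try (Connection conn = DriverManager.getConnection("jdbc:postgresql://localhost/crm"); // hypothetical URL
         Statement stmt = conn.createStatement();
         ResultSet rs = stmt.executeQuery("SELECT customer_id, name FROM customers")) {
      while (rs.next()) {
        customerNames.put(rs.getString("customer_id"), rs.getString("name"));
      }
    } catch (Exception e) {
      e.printStackTrace(); // a real connector would report the failure and retry
    }
  }
}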

Egress connectors

Publish up-leveled synthetic events back to message brokers, or periodically sample and batch write derivative big data sets.
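
The egress direction, sketched with the plain Kafka producer client; the topic name and the aggregated payload are again assumptions made for illustration.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class EgressPublisher {
  public static void main(String[] args) {
    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

    try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
      // Publish an up-leveled, aggregated insight rather than raw telemetry.
      String event = "{\"vehicleId\":\"bus-42\",\"status\":\"LATE\",\"predictedDelayMinutes\":7}";
      producer.send(new ProducerRecord<>("fleet-alerts", "bus-42", event)); // hypothetical topic
      producer.flush();
    }
  }
}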

What data connectors enable

Ubiquitous context

Continuously join multiple in-motion and at-rest data sources using stateful entities as deterministic points of intersection.

Kafka / Confluent

Consume high volume partitioned topics, and selectively publish up-leveled and aggregated insights back to Kafka.

Pulsar

Keep up with low-latency Pulsar topics, and publish back high level state changes that can't be computed by Pulsar Functions.

RabbitMQ

Coordinate with event-driven microservices and other enterprise systems that communicate via AMQP queues.

JDBC

Periodically query infrequently changing relational context in the background and cache the results in stateful entities.

HTTP

Poll REST APIs at low rates, and map the responses onto stateful entities where they can be efficiently accessed at high rates.
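
As a sketch of this low-rate-poll, high-rate-read pattern, the standard java.net.http client and a scheduled executor suffice; the endpoint URL and the cached weather value are illustrative assumptions, not Nstream's HTTP connector API.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

public class HttpContextPoller {
  static final HttpClient client = HttpClient.newHttpClient();
  // Latest response body, readable at memory speed by any number of callers.
  static final AtomicReference<String> latestWeather = new AtomicReference<>("{}");

  public static void main(String[] args) {
    ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    // Poll the REST API once a minute (low rate)...
    scheduler.scheduleAtFixedRate(HttpContextPoller::poll, 0, 60, TimeUnit.SECONDS);
  }

  static void poll() {
    try {
      HttpRequest request = HttpRequest.newBuilder(
          URI.create("https://api.example.com/weather?station=KSFO")) // hypothetical endpoint
          .GET().build();
      HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
      latestWeather.set(response.body());
    } catch (Exception e) {
      e.printStackTrace(); // a real connector would back off and retry
    }
  }

  // ...while entities read the cached value at high rates, with no network hop.
  static String currentWeather() {
    return latestWeather.get();
  }
}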

File

Replay data from CSV files to get up and running quickly with snapshots of streaming and relational datasets.

How data connectors work

From data parallel to entity parallel

Traditional data parallel architectures struggle to join multiple high-rate data sources at low latency because related data ends up scattered inconsistently across machines. Nstream’s entity parallel architecture, by contrast, deterministically maps ingested data onto stateful entities, enabling business logic to compute on joined state at memory latency.

Uniform partitioning.
All data about a given uniquely identifiable object gets routed to the same stateful entity, effectively joining all data sources together at ingestion.
Seamless serialization.
Automatic detection and decoding of JSON, XML, CSV, and Avro enables business logic to work directly with format-agnostic structured data.
Just another Web Agent.
Data connectors run as ordinary Web Agents, making them flexible and easy to implement, deploy and manage with a single set of tooling.
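
A bare-bones sketch of the entity parallel routing idea; the route method, Entity class, and source labels are placeholders, not Nstream internals.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class EntityRouter {
  // One entity per unique real-world object; all sources converge here.
  static final Map<String, Entity> entities = new ConcurrentHashMap<>();

  // Every connector calls route() with whatever ID its records carry,
  // so Kafka messages, database rows, and HTTP responses about the same
  // object are joined in a single place at ingestion time.
  static void route(String entityId, String source, Object payload) {
    entities.computeIfAbsent(entityId, Entity::new).update(source, payload);
  }
}

class Entity {
  final String id;
  final Map<String, Object> joinedState = new ConcurrentHashMap<>();

  Entity(String id) { this.id = id; }

  void update(String source, Object payload) {
    joinedState.put(source, payload); // joined state lives in memory...
    // ...so business logic can react here with no cross-machine shuffle.
  }
}

Calling route("bus-42", "gps", fix) from a messaging connector and route("bus-42", "schedule", row) from a database connector would land both payloads in the same Entity instance, which serves as the deterministic point of intersection described above.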
