Streaming
Readable Stream
A Readable stream represents a data source whose content can be consumed chunk by chunk.
For example: a file (fs.createReadStream), an S3 object (GET request), or an HTTP request.
var fs = require("fs");
var { Readable } = require("stream");

var data = '';

// Create a readable stream to read the file
var readerStream = fs.createReadStream('file.txt');
readerStream.setEncoding('utf8'); // set the encoding to utf8

// Create a readable stream from a string
var stringStream = Readable.from("123");

// Or create an empty readable stream and push chunks into it manually
var pushStream = new Readable({ read() {}, encoding: 'utf8' });
for (let i = 0; i < 3; i++) {
  pushStream.push("123");
}
// end the stream by pushing null
pushStream.push(null);

// listen for data from the readable stream
readerStream.on('data', function (chunk) {
  data += chunk;
});

// listen for the end of the stream
readerStream.on('end', function () {
  console.log(data);
});

// listen for errors
readerStream.on('error', function (err) {
  console.log(err.stack);
});

// Alternatively, consume the chunks with async iteration
// (use either the 'data' listener or this loop for a given stream, not both)
(async function () {
  for await (const chunk of pushStream) {
    console.log(chunk);
  }
})();

console.log("Program Ended");
Writable Stream
A Writable stream represents a destination that can be written to chunk by chunk.
E.g.: a file (fs.createWriteStream), an S3 object (PUT request), or an HTTP response.
Data is first copied into an in-memory buffer and then flushed to the destination.
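As a minimal sketch (the file name output.txt is just an illustrative choice), writing to a file chunk by chunk looks like this:
var fs = require("fs");

// Create a writable stream to the destination file (output.txt is an arbitrary example name)
var writerStream = fs.createWriteStream('output.txt');

// Write chunks into the stream
writerStream.write('first chunk\n');
writerStream.write('second chunk\n');

// Signal that no more data will be written
writerStream.end();

// listen for the 'finish' event, emitted once all data has been flushed
writerStream.on('finish', function () {
  console.log('All data has been written');
});

// listen for errors
writerStream.on('error', function (err) {
  console.log(err.stack);
});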
If you are writing data very fast, the destination (e.g. the disk) probably won't be able to keep up and the buffered data will grow. If this keeps happening you can easily fill all the memory; this situation is called backpressure.
In fact, every single time we call .write(data) on a stream, the stream returns a boolean value. If this value is true it is safe to continue; if the value is false it means that the destination is lagging behind and the stream is accumulating too much data to write. In that case, when you receive false, you should slow down, or even better stop writing, until all the buffered data has been written out.
A writable stream will emit a 'drain' event once all the buffered data has been flushed and it's safe to write again.
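A sketch of respecting backpressure, assuming we want to write a large number of chunks to output.txt: keep calling .write() while it returns true, and pause until 'drain' when it returns false.
var fs = require("fs");

var writerStream = fs.createWriteStream('output.txt'); // example destination

function writeManyChunks(stream, totalChunks) {
  var i = 0;
  function writeNext() {
    var ok = true;
    // keep writing while the stream reports it can accept more data
    while (i < totalChunks && ok) {
      ok = stream.write('chunk ' + i + '\n');
      i++;
    }
    if (i < totalChunks) {
      // backpressure: wait for 'drain' before writing again
      stream.once('drain', writeNext);
    } else {
      stream.end();
    }
  }
  writeNext();
}

writeManyChunks(writerStream, 1000000);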
Pipe
Connects a Readable stream to a Writable stream: all the data read from the readable is automagically™️ copied over to the writable stream, chunk by chunk.
It handles the 'end' event properly: once all the data has been copied over to the writable, both the readable and the writable are properly closed.
It returns the destination stream, so you can keep using .pipe to connect many streams together. This is mostly useful with transform streams and allows us to create complex pipelines for data processing.
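A sketch of chaining .pipe(), assuming file.txt exists and using zlib's gzip as the transform step in the middle:
var fs = require("fs");
var zlib = require("zlib");

// Read file.txt, gzip it, and write the result to file.txt.gz
fs.createReadStream('file.txt')
  .pipe(zlib.createGzip())                  // transform stream: compresses the chunks
  .pipe(fs.createWriteStream('file.txt.gz'));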
Node also provides the stream.pipeline() utility: we can pass an arbitrary number of streams followed by a callback function. The streams are piped in order, and the callback is called when all the processing is done or when an error occurs. If there's an error, all the streams are properly ended and destroyed for you. With this utility we could rewrite the previous example as follows:
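A sketch of the same gzip copy rewritten with stream.pipeline() (same assumed file names as above):
var fs = require("fs");
var zlib = require("zlib");
var { pipeline } = require("stream");

pipeline(
  fs.createReadStream('file.txt'),
  zlib.createGzip(),
  fs.createWriteStream('file.txt.gz'),
  function (err) {
    // the callback runs once, either on error or on success
    if (err) {
      console.error('Pipeline failed', err);
    } else {
      console.log('Pipeline succeeded');
    }
  }
);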
Duplex Stream

A Duplex stream is essentially a stream that is both Readable and Writable.
Its main purpose is to act as middleware: it sits between a readable stream and a writable stream, reading data chunk by chunk, handling it, and writing the processed chunks onward.
A Transform stream is one example of a Duplex stream.
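A minimal sketch of a Transform stream that upper-cases each chunk (the file names and the upperCase variable are just illustrative choices):
var { Transform, pipeline } = require("stream");
var fs = require("fs");

// Transform stream that upper-cases every chunk passing through it
var upperCase = new Transform({
  transform(chunk, encoding, callback) {
    // push the modified chunk to the readable side
    callback(null, chunk.toString().toUpperCase());
  }
});

pipeline(
  fs.createReadStream('file.txt'),        // assumed input file
  upperCase,                               // duplex (transform) middle step
  fs.createWriteStream('file-upper.txt'),  // assumed output file
  function (err) {
    if (err) console.error('Pipeline failed', err);
  }
);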