File Stream Operations

Accessing the file system, whether reading from or writing to disk, can become a source of performance bottlenecks if designed poorly. The arrival of SSD storage has alleviated the problem, but has not eliminated it.

How Plumbr will help you

As we can see from the root cause exposed by Plumbr, invoking the service took 18 seconds to complete. For a large file being streamed this might be expected behavior, but in the current example the file in question was just 29MB in size.

The problem exposed by Plumbr is related to the 30+ million read() operations carried out against the filesystem. Looking at the right panel in the example above, we see that the culprit is hidden in the FileSystemService.fileSize() method, which reads the file byte by byte. Here is a part of its source code:

int bytes = 0;

// The stream variable is not visible in the excerpt; 'is' stands in for it.
// read() issues one call per byte and returns -1 at end of file, so for a
// 29MB file this loop performs 30+ million read() operations.
while (is.read() >= 0) {
    bytes++;
}

return bytes;

The Solution

There are two common problems that occur when reading from or writing to a file stream, both leading to performance bottlenecks:

  • Lack of buffering: each read or write operation incurs overhead, depending on the operating system, file system and hardware. Instead of reading or writing one byte at a time, a much more performant approach is to do it in bulk. A simple way to achieve this is to wrap the stream in a BufferedInputStream or BufferedOutputStream.
  • System issues: as noted above, the performance of file operations depends on the operating system, the file system and the hardware. It is sometimes the case that one of these becomes the bottleneck, and even a single file stream operation can take tens of seconds.
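To illustrate the first point, here is a minimal sketch of buffered byte counting. The class and method names (BufferedCountDemo, countBytes) are our own, not from the original service; wrapping the FileInputStream in a BufferedInputStream means most read() calls are served from an in-memory buffer instead of issuing one system call per byte:

```java
import java.io.BufferedInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class BufferedCountDemo {

    // Counts the bytes in a file; the BufferedInputStream fills an internal
    // buffer in bulk, so the per-byte read() below rarely touches the disk.
    static long countBytes(File file) throws IOException {
        long bytes = 0;
        try (InputStream in = new BufferedInputStream(new FileInputStream(file))) {
            while (in.read() >= 0) {   // read() returns -1 at end of file
                bytes++;
            }
        }
        return bytes;
    }

    public static void main(String[] args) throws IOException {
        // Create a small temporary file to count.
        File tmp = File.createTempFile("demo", ".bin");
        tmp.deleteOnExit();
        try (OutputStream out = new FileOutputStream(tmp)) {
            out.write(new byte[1024]);   // 1 KiB of zero bytes
        }
        System.out.println(countBytes(tmp));   // prints 1024
    }
}
```

Note that the loop body is unchanged from the byte-by-byte version; buffering alone removes the system-call overhead, which is why it is usually the first fix to try.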

In the current example, leaving aside the issue of integer overflow (the byte count is accumulated in an int, which overflows for files larger than 2GB), we see that the size of the file is determined by reading it one byte at a time until end of file is reached. The fix is very straightforward: simply replace the whole loop with one invocation of File.length(). This change alone was enough to remove the extra 18 seconds from the wait time.
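The fix can be sketched as follows; the demo class name is our own, but File.length() is the standard JDK call, and its long return type also sidesteps the int overflow mentioned above:

```java
import java.io.File;
import java.io.FileOutputStream;

public class FileSizeDemo {

    public static void main(String[] args) throws Exception {
        // Create a small temporary file with a known size.
        File tmp = File.createTempFile("size", ".bin");
        tmp.deleteOnExit();
        try (FileOutputStream out = new FileOutputStream(tmp)) {
            out.write(new byte[29]);   // write 29 bytes
        }

        // One metadata lookup instead of millions of read() calls.
        long size = tmp.length();
        System.out.println(size);   // prints 29
    }
}
```

Unlike the loop, File.length() reads the size from file system metadata, so its cost does not grow with the size of the file.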
