- **~4 GB** of memory for heavier operations. The commands below set the swap file size to 4 GB:
```
sudo dphys-swapfile swapoff
sudo sed -i 's/^\(CONF_SWAPSIZE=\).*/\14096/' /etc/dphys-swapfile
sudo dphys-swapfile setup
sudo dphys-swapfile swapon
sudo reboot
```
If the test cases do not request a large portion of the canvas ***and*** resize it to a still relatively large ratio (e.g. ~0.9) **at the same time**, this might not be necessary.
- **Access to** the `mkfifo` command, which is available by default except in very special cases.
If running via `dotnet run`, download the `.nupkg` file [here](https://null.formulatrix.dev/reinardras/stitch_something/releases/tag/0.0.0) and save it in `LocalNuget` before running. Otherwise this shouldn't be necessary.
If a library is still reported missing, download the `.so` file and place it in `/usr/local/lib`.
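As a quick sanity check for the named-pipe requirement above, a small script like the following can confirm that the environment supports FIFOs before starting the server. This is a hypothetical helper, not part of the submission; `os.mkfifo` wraps the same syscall the `mkfifo` command uses.

```python
import os
import shutil
import stat
import tempfile

# Hypothetical pre-flight check (not part of the submission):
# confirm the mkfifo command exists and FIFOs can actually be created.
assert shutil.which("mkfifo") is not None, "mkfifo command not found"

path = os.path.join(tempfile.mkdtemp(), "stitch_fifo")
os.mkfifo(path)  # same syscall the mkfifo command wraps
print(stat.S_ISFIFO(os.stat(path).st_mode))
```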
This submission contains no particular magic in the image processing, just OpenCVSharp stretched to the best of its ability, as far as my knowledge goes. This section contains a brief overview of the main features.
To the writer's knowledge, the end result is **fast enough for the operation to be bottlenecked by network transfer speed** instead of image processing, except when resizing.
*(note: I haven't run rigorous benchmarks for this, so take it with a grain of salt)*
### Canvas Memory Usage
During initialization, a blank canvas of 55x31 chunks (each 720x720 pixels) is created, along with a 55x31 grid of enums indicating whether each chunk is not yet loaded, currently being loaded, or ready to use.
The memory usage of this canvas grows with the chunks that have been loaded into it, topping out at ~2.6 GB when all chunks are loaded.
When multiple requests refer to the same region of the canvas, that region doesn't need to be loaded again.
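The state grid and the memory ceiling can be sketched as follows. This is an illustrative Python model (the submission itself is C#); the enum member names are assumptions, and the arithmetic shows where the ~2.6 GB figure comes from, assuming an 8-bit BGR canvas at 3 bytes per pixel.

```python
from enum import Enum

class ChunkState(Enum):
    NOT_LOADED = 0   # chunk has never been read into the canvas
    LOADING = 1      # another request is currently decoding it
    READY = 2        # safe to crop from the main canvas

GRID_COLS, GRID_ROWS = 55, 31   # canvas is 55x31 chunks
CHUNK_PX = 720                  # each chunk is 720x720 pixels

# One state cell per chunk; everything starts unloaded.
states = [[ChunkState.NOT_LOADED] * GRID_COLS for _ in range(GRID_ROWS)]

# Upper bound for a fully loaded canvas, 3 bytes per pixel:
full_bytes = GRID_COLS * GRID_ROWS * CHUNK_PX * CHUNK_PX * 3
print(f"{full_bytes / 1e9:.2f} GB")  # → 2.65 GB
```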
### Coordinate Processing
When parsing the request, the requested canvas region may not correspond to the chunks that will actually be read; for example, `A1:A3` with no offset and a `(0.2, 1)` crop only reads part of the `A1` chunk.
To handle that, the resulting global crop RoI is calculated first, and the chunks that are *actually* needed are identified (referred to as the *Adapted Sectors of Interest*).
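The chunk-intersection step can be sketched like this. The function name and coordinate convention are illustrative, not the submission's actual API; the idea is just that the crop RoI, in global pixels, is mapped onto the chunk grid by integer division.

```python
CHUNK = 720  # chunk edge length in pixels

def adapted_sectors(x, y, w, h):
    """Return the (col, row) chunks covered by a global crop RoI.

    A sketch of the 'Adapted Sectors of Interest' idea; name and
    signature are illustrative, not the submission's API.
    """
    c0, r0 = x // CHUNK, y // CHUNK
    c1, r1 = (x + w - 1) // CHUNK, (y + h - 1) // CHUNK
    return [(c, r) for r in range(r0, r1 + 1) for c in range(c0, c1 + 1)]

# The A1:A3 example: a request spanning 3 chunks horizontally, cropped
# to 20% of its width at no offset, only ever touches the first chunk.
print(adapted_sectors(0, 0, int(3 * CHUNK * 0.2), CHUNK))  # → [(0, 0)]
```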
### Chunk Loading
OpenCV Mats are thread-safe as long as all operations are performed on non-overlapping regions. This allows chunks to be read into the main canvas in parallel.
After coordinate processing, chunks in the adapted SoI are checked for their load status, then processed accordingly.
The status "currently being loaded" is relevant when multiple requests requiring the same chunk(s) are underway; in that case, the loader that arrives later spin-waits until the chunk finishes loading.
**This mechanism enables the shared canvas to serve multiple processes.**
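The claim-then-spin mechanism can be sketched as below. This is an illustrative Python model, not the submission's C# code: a lock makes the state transition atomic so exactly one loader decodes the chunk, while late arrivals spin until it is ready (the `sleep` stands in for the actual decode).

```python
import threading
import time

NOT_LOADED, LOADING, READY = 0, 1, 2
state = {"A1": NOT_LOADED}
state_lock = threading.Lock()

def ensure_loaded(chunk):
    # Atomically claim the chunk; only one thread gets to decode it.
    with state_lock:
        claimed = state[chunk] == NOT_LOADED
        if claimed:
            state[chunk] = LOADING
    if claimed:
        time.sleep(0.05)          # stand-in for decoding into the canvas
        state[chunk] = READY
    else:
        while state[chunk] != READY:   # spin-wait on the other loader
            time.sleep(0.001)

threads = [threading.Thread(target=ensure_loaded, args=("A1",)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(state["A1"] == READY)  # → True
```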
Once the needed chunks are guaranteed to be loaded into the main canvas, the next step is cropping. This is a trivial operation in OpenCV that requires no extra memory, since the crop still refers to the main canvas.
What's not trivial is *encoding* the result, which, after some quick tests, turns out to take longer than reading and decoding multiple chunk images into the canvas.
OpenCV provides two functions that can encode to PNG:
- `Cv2.ImWrite`, which writes to a file, and
- `Cv2.ImEncode`, which writes to a byte array.
Both of these require the encoding to finish before the resulting data can be used.
To alleviate this problem, a **named pipe** (a special file that acts as a pipe buffer) is used: the image is `ImWrite`-d to the named pipe while ASP.NET reads from it. By doing this:
- The encode and send process is parallelized
- No extra memory needs to be allocated to hold the encoded image, neither on storage nor in RAM
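The encode-to-pipe idea can be demonstrated end to end with a small Python sketch. Here a writer thread stands in for `Cv2.ImWrite` targeting the FIFO, and the main thread stands in for the ASP.NET response streaming bytes as they arrive; the payload is a fake placeholder, not a real PNG.

```python
import os
import tempfile
import threading

fifo = os.path.join(tempfile.mkdtemp(), "encode_fifo")
os.mkfifo(fifo)

def encoder():
    # Stand-in for Cv2.ImWrite(fifo, mat): the encoder writes its
    # output into the pipe instead of a regular file.
    with open(fifo, "wb") as f:
        f.write(b"\x89PNG fake payload")

t = threading.Thread(target=encoder)
t.start()

# The HTTP response side reads from the pipe as bytes arrive, so
# sending overlaps with encoding and no full in-memory PNG is needed.
with open(fifo, "rb") as f:
    data = f.read()
t.join()
print(data.startswith(b"\x89PNG"))  # → True
```

Note that opening a FIFO for writing blocks until a reader opens it (and vice versa), which is what keeps the two sides in lockstep without any explicit synchronization.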
### (Unsolved) Resizing
This remains the one pain point without a straightforward solution. If no resize is requested, the request can be served by cropping the main canvas and encoding straight to a named pipe, eliminating a lot of time and memory overhead along the way.
If a resize is requested, a new Mat containing the resized image has to be allocated.
Ideas for this problem:
- a resize function that outputs a stream,
- an `ImEncode`/`ImWrite` function that accepts a stream.