First Commit

parent bb40883c7d
commit 696158848f

18 changed files with 787 additions and 0 deletions
README.md · 101 lines · new file
# I Paid for 4 Gigabytes of RAM So I Will Use All 4 Gigabytes of RAM and Probably More

Submission for the Stitch-A-Ton Contest.

## Prerequisites

- **.NET installation**
- **~4 GB** of memory for heavier operations. The commands below set the swap file size to 4 GB:

```
sudo dphys-swapfile swapoff
sudo sed -i 's/^\(CONF_SWAPSIZE=\).*/\14096/' /etc/dphys-swapfile
sudo dphys-swapfile setup
sudo dphys-swapfile swapon
sudo reboot
```

If no test case requests a large portion of the canvas ***AND*** resizes it at a still fairly large ratio (i.e. ~0.9) **at the same time**, this might not be necessary.

- **Access to** the `mkfifo` command, satisfied by default except in very special cases
- **OpenCVSharp4 Runtime for Raspberry Pi 5**

  A copy is available on this repository's releases page.

## Running

From the repository root:

```
ASSET_PATH_RO=<path> dotnet run --project StitchATon --launch-profile deploy
```

The API is accessible at `:5255` and provides the following endpoints:

- [POST] `api/image/generate`: complies with the competition guidelines
- [GET] `api/image/sanity`: generates a predefined crop region:

  `G6:I8`, `(.1, .1)` offset, `(.8, .8)` crop, at 0.6 scale.
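
For a quick smoke test, something like the following C# snippet (not part of the project; it only assumes the default `localhost:5255` address above) fetches the sanity image:

```csharp
// Hypothetical client-side check: download the predefined sanity crop as a PNG.
using System.IO;
using System.Net.Http;

using var http = new HttpClient();
byte[] png = await http.GetByteArrayAsync("http://localhost:5255/api/image/sanity");
await File.WriteAllBytesAsync("sanity.png", png);
```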

To browse the API, prepend `ASPNETCORE_ENVIRONMENT=Development` to the command and go to `/swagger/index.html`.

## Writeup

This submission contains no particular magic in the image processing; it is just OpenCVSharp stretched to the best of its ability, to the best of my knowledge. This section gives a brief overview of the main features.

To the writer's knowledge, the end result is **fast enough that the operation is bottlenecked by network transfer speed** rather than by image processing, except when resizing.

*(note: I did not run rigorous benchmarks for this, so take it with a grain of salt)*

### Canvas Memory Usage

During initialization, a blank canvas of 55x31 chunks of 720x720 pixels is created, along with a 55x31 grid of enums indicating whether each chunk is not yet loaded, currently being loaded, or ready to use.

The memory usage of this canvas follows which chunks have been loaded into it, topping out at ~2.6 GB when all chunks are loaded.
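
A minimal C# sketch of that setup (the names and exact types here are illustrative, not the ones used in the actual project):

```csharp
using OpenCvSharp;

// Illustrative only: one big blank canvas plus a per-chunk status grid.
public enum ChunkState { NotLoaded = 0, Loading = 1, Ready = 2 }

public class CanvasState
{
    public const int ChunkSize = 720, Cols = 55, Rows = 31;

    // 55 x 31 chunks of 720 x 720 BGR pixels: 39600 x 22320 x 3 bytes, roughly
    // 2.6 GB once every page has actually been written to.
    public readonly Mat Canvas = new Mat(Rows * ChunkSize, Cols * ChunkSize, MatType.CV_8UC3);

    // Load status per chunk, stored as ints so they can be updated atomically later.
    public readonly int[,] States = new int[Cols, Rows];   // everything starts as NotLoaded
}
```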

When multiple requests refer to the same region of the canvas, it does not need to be loaded again.

### Coordinate Processing

When parsing a request, the requested canvas size may not correspond to the chunks that will actually be read; for example `A1:A3` at no offset and `(0.2, 1)` crop will only read part of the `A1` chunk:

```
canvas
┌─────────────────┬─────────────────┬─────────────────┐
│ ┌───────────┐   │                 │                 │
│ │           │   │                 │                 │
│ │  final    │   │                 │                 │
│ │  result   │   │                 │                 │
│ │           │   │                 │                 │
│ └───────────┘   │                 │                 │
└─────────────────┴─────────────────┴─────────────────┘
```

To handle that, the resulting global crop RoI is calculated first, and the chunks that are *actually* needed are identified (referred to as *Adapted Sectors of Interest*).
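
A rough C# sketch of that step, using the `A1:A3` example above. The request semantics assumed here (that `A1:A3` spans three chunks along the x-axis, and that offset and crop are fractions of the requested range) and all names are illustrative, not the actual implementation:

```csharp
using OpenCvSharp;

// Turn "A1:A3, offset (0, 0), crop (0.2, 1)" into a pixel RoI on the canvas and
// the chunk indices that actually intersect it (the adapted Sectors of Interest).
const int ChunkSize = 720;

// Requested sector range A1:A3, assumed to be 3 chunks wide and 1 chunk tall.
var requested = new Rect(0, 0, 3 * ChunkSize, 1 * ChunkSize);

(double offX, double offY) = (0.0, 0.0);   // offset as a fraction of the requested range
(double cropW, double cropH) = (0.2, 1.0); // crop as a fraction of the requested range

var roi = new Rect(
    requested.X + (int)(offX * requested.Width),
    requested.Y + (int)(offY * requested.Height),
    (int)(cropW * requested.Width),
    (int)(cropH * requested.Height));

// Only the chunks that overlap the RoI need to be loaded.
int firstCol = roi.Left / ChunkSize, lastCol = (roi.Right - 1) / ChunkSize;
int firstRow = roi.Top / ChunkSize,  lastRow = (roi.Bottom - 1) / ChunkSize;
// Here that is a single chunk: only part of `A1` is actually read.
```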

### Chunk Loading

OpenCV Mats are thread-safe as long as every operation is performed on non-overlapping regions, which allows chunks to be read into the main canvas in parallel.

After coordinate processing, the chunks in the adapted SoI are checked for their load status, then processed accordingly.

The "currently being loaded" status matters when multiple requests that need the same chunk(s) are in flight; in that case, the loader that arrives later spinlocks until the chunk has finished loading.
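
A minimal sketch of that gate, reusing the hypothetical `CanvasState` from the memory-usage section above (the actual loader's names and file layout will differ):

```csharp
using System.IO;
using System.Threading;
using OpenCvSharp;

static class ChunkLoader
{
    // Make sure one chunk is present in the shared canvas, loading it at most
    // once even when several requests race for it.
    public static void EnsureLoaded(CanvasState s, string assetPath, int col, int row)
    {
        // Atomically claim the chunk if nobody has started loading it yet.
        int previous = Interlocked.CompareExchange(
            ref s.States[col, row], (int)ChunkState.Loading, (int)ChunkState.NotLoaded);

        if (previous == (int)ChunkState.Ready)
            return;                                   // already usable

        if (previous == (int)ChunkState.Loading)
        {
            // Another request is loading the same chunk: spin until it is ready.
            var spin = new SpinWait();
            while (Volatile.Read(ref s.States[col, row]) != (int)ChunkState.Ready)
                spin.SpinOnce();
            return;
        }

        // This request won the claim: decode the chunk image straight into its
        // region of the canvas. The file-name scheme below is made up.
        using var chunk = Cv2.ImRead(Path.Combine(assetPath, $"{col}_{row}.png"));
        using var region = new Mat(s.Canvas, new Rect(
            col * CanvasState.ChunkSize, row * CanvasState.ChunkSize,
            CanvasState.ChunkSize, CanvasState.ChunkSize));
        chunk.CopyTo(region);   // non-overlapping regions, so parallel loads are safe

        Volatile.Write(ref s.States[col, row], (int)ChunkState.Ready);
    }
}
```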

**This mechanism lets the shared canvas serve multiple concurrent requests.**

### Serving and Encoding the Cropped Image

Once the needed chunks are known to be loaded into the main canvas, the next step is cropping it, which is a trivial operation in OpenCV and requires no extra memory, since the crop still refers to the main canvas.
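
In OpenCVSharp this boils down to taking a sub-matrix header over the canvas (illustrative, with `canvas` and `roi` as in the earlier sketches):

```csharp
// The crop shares the canvas's pixel buffer; no pixel data is copied.
using var crop = new Mat(canvas, roi);
// crop.IsSubmatrix() == true, and encoding can read straight from canvas memory.
```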

What is *not* trivial is *encoding* the crop, which, after some quick tests, turns out to take longer than reading and decoding multiple chunk images into the canvas.

OpenCV provides two functions that can encode to PNG:

- `Cv2.ImWrite`, which writes to a file, and
- `Cv2.ImEncode`, which writes to a byte array.

Both of these require the encoding to finish before the resulting data can be used.

To alleviate this problem, a **named pipe** (a special file that behaves as a pipe buffer) is used: the crop is `ImWrite`-n to the named pipe while ASP.NET reads from it (see the sketch after this list). By doing this:

- the encode and send steps are parallelized, and
- no extra memory needs to be allocated to encode the image, either on storage or in RAM.
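
A minimal sketch of that trick, assuming a Linux host; the handler name, FIFO path scheme, and the wiring into ASP.NET are illustrative rather than the project's actual code:

```csharp
using System;
using System.Diagnostics;
using System.IO;
using System.Threading.Tasks;
using OpenCvSharp;

static class PipeEncoder
{
    // Encode `crop` as PNG through a named pipe, streaming it into an HTTP response body.
    public static async Task EncodeToAsync(Mat crop, Stream responseBody)
    {
        string fifo = Path.Combine(Path.GetTempPath(), $"stitchaton_{Guid.NewGuid():N}.png");
        Process.Start("mkfifo", fifo)!.WaitForExit();        // create the pipe "file"

        // Writer side: ImWrite blocks until a reader opens the FIFO, then pushes PNG bytes into it.
        var encode = Task.Run(() => Cv2.ImWrite(fifo, crop));

        // Reader side: forward bytes to the client as they are produced, so encoding and
        // network transfer overlap and no full PNG buffer is ever materialized.
        await using (var pipe = File.OpenRead(fifo))
            await pipe.CopyToAsync(responseBody);

        await encode;
        File.Delete(fifo);
    }
}
```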

### (Unsolved) Resizing

This remains the only pain point without a straightforward solution. If no resize is requested, the request can be served by cropping the main canvas and encoding the crop into a named pipe, eliminating a lot of time and memory overhead along the way.

If a resize is requested, a new Mat containing the resized image has to be allocated.
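
For contrast with the no-resize path, a sketch of what the resize path looks like (again with illustrative names, reusing `canvas`, `roi`, and the FIFO path from the earlier sketches):

```csharp
// Unlike the plain crop, the resized image owns its own pixel buffer.
using var crop = new Mat(canvas, roi);          // still just a view into the canvas
using var resized = new Mat();                  // a fresh allocation, filled by Cv2.Resize
Cv2.Resize(crop, resized, new Size(0, 0), 0.6, 0.6, InterpolationFlags.Area);
Cv2.ImWrite(fifoPath, resized);                 // only now can it go through the named pipe
```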

Ideas for this problem:

- a resize function that outputs a stream, or
- an `ImEncode`/`ImWrite` function that accepts a stream.