Hi, this is a new contract with an Augmentative and Alternative Communication (AAC) open source project I know of, OTTAA Project, based in Argentina. You can see the website and GitHub for details. The contract is remote. The team is a passionate, driven group of people doing amazing open source work to enable children and adults to have a voice. Full details are below:
We are looking for someone to work on an ESP32CAM project. The project is based on image processing, but most of the algorithms have already been designed and proven in Python and Java; the work would be migrating them to Arduino/C++ and adapting them to how the ESP32CAM works.
The ideal candidate will be responsible for conceptualizing and executing clear, quality code to develop the best software. You will test your code, identify errors, and iterate to ensure quality code. You will also support our customers and partners by troubleshooting any of their software issues.
Responsibilities
Write clear, quality code and perform test reviews.
Develop, implement, and test embedded systems.
Detect and troubleshoot software issues.
Qualifications
Comfort using the programming languages Embedded C & Python.
What are their thoughts about using a more modern alternative to C?
I am all for helping to speed up the development of human-augmenting technologies. I have helped the RHVoice project where I can, spreading the word and encouraging adoption. But I've shot myself in the foot with C more often than I'd like, and repeating that experience feels stressful. However, I have no real experience with Rust, so I want to hear the opinion of the people with the money to lift such exciting initiatives.
I guess the main reason would be integration with OpenCV, the Linux kernel, and PipeWire (which are all C/C++ based), if the cam is just an input driver that feeds image data to the image-processing algorithm. I haven't found the cam project information on the website, and I am more interested in seeing the task than the job here.
I have watched the Google MediaPipe project over the years as it developed a fast-enough gesture-recognition framework for gesture-based slide control, so I have some background there and can share some problems. For example, the MediaPipe code is open source, but the data used to train the models is not.
I am not asking this question privately, because I would have been glad to read the answer if somebody had asked it before me.