As a research engineer you will work on improving kapa’s ability to answer harder and harder technical questions. Check out Docker’s documentation for a live example of what kapa is.
Work directly with the founding team and our software engineers.
Research and work on state-of-the-art retrieval and search techniques.
Work on and deploy machine learning models as part of RAG.
Continuously improve our quality evaluation frameworks to enable robust iteration.
Keep up with the latest developments in the space and see how they can be applied.
Design and run experiments.
In addition to the founding team, you'll have support from a number of leading academics in the field that are all close advisors (incl. Douwe Kiela, author of the original RAG paper).
A Master's/PhD degree in Computer Science, Machine Learning, Mathematics, Statistics, or a related field.
A detailed understanding of machine learning, deep learning (including LLMs) and natural language processing.
Hands-on experience in training, fine-tuning and deploying large language models.
Prior experience working with vector databases, search indices, or other data stores for search and retrieval use cases.
Significant experience building evaluation systems for LLMs or search.
Familiarity with various information retrieval techniques, such as lexical search and dense vector search.
The ability to work effectively in a fast-paced environment where things are sometimes loosely defined.
A desire to learn more about machine learning research.
* This is neither an exhaustive nor a necessary set of attributes. Even if none of these apply to you, but you believe you can contribute to kapa.ai, please reach out.
Django, NextJS, and lots of custom retrieval-augmented generation pipelining.
Remember to mention in your application that you saw the job at Ofir.