To advance the organization by developing algorithms and building models that uncover connections and enable better decisions without human intervention.
Role
Your primary responsibility in this role is to research and prototype new ideas using machine learning and deep learning techniques to accelerate product development.
Authority
Develop and train models
Research new technologies
Participate in the recruitment process
Responsibility
Architect, build, maintain, and improve new and existing suites of algorithms and their underlying systems,
Implement end-to-end solutions for batch and real-time algorithms along with requisite tooling around monitoring, logging, automated testing, performance testing, and A/B testing,
Utilize your entrepreneurial spirit to identify new opportunities to optimize business processes and improve consumer experiences, and prototype solutions to demonstrate value with a crawl, walk, run mindset,
Work closely with data scientists and analysts to create and deploy new product features,
Establish scalable, efficient, automated processes for data analysis, model development, validation, and implementation,
Write efficient and well-organized software to ship products in an iterative, continual-release environment,
Contribute to and promote good software engineering practices across the team,
Share knowledge with the team to drive adoption of best practices,
Actively contribute to and re-use community best practices.
Requirements
University or advanced degree in engineering, computer science, mathematics, or a related field
5+ years of experience developing and deploying machine learning systems into production
Strong experience working with a variety of relational SQL and NoSQL databases
Strong experience working with big data tools: Hadoop, Spark, Kafka, etc.
Experience with at least one cloud provider solution (AWS, GCP, Azure)
Strong experience with object-oriented/functional scripting languages: Python, Java, C++, Scala, etc.
Ability to work in a Linux environment
Industry experience building innovative end-to-end Machine Learning systems
Ability to quickly prototype ideas and solve complex problems by adapting creative approaches
Experience working with distributed systems, service-oriented architectures and designing APIs
Strong knowledge of data pipeline and workflow management tools
Expertise in standard software engineering methodology, e.g. unit testing, test automation, continuous integration, code reviews, design documentation
Relevant working experience with Docker and Kubernetes is a big plus
Benefits
It's always a good idea to include the benefits the company will provide, such as:
Flexible hours to give you freedom and increase productivity,