Qualimental Technologies is a data science and web development consultancy, and we are looking for an experienced Data Engineer / Data Ops specialist.
- Databases: We work with a broad range of database types including MongoDB, Neo4j, MySQL, BigQuery, PostgreSQL, and similar.
- Data models: Each project comprises a database, a set of ETL pipelines, a content delivery API, and a front-end application. It is crucial that the database is designed properly and that the data models support optimal performance and storage.
- Analytics/BI: We work extensively with analytics data and platforms. We often need to instrument our applications with code that captures analytics data, then store and visualise that data within analytics applications; this includes database and data model design.
- In-house products: We are also developing several data-focused products in-house. Each requires careful planning and component design so that it is built well and ready for operation at scale.
- ETL pipelines: We use Python and the petl library to develop ETL pipelines, and we are looking for a data engineer / data ops specialist who can work with the existing pipelines and, ideally, develop them further.
- Set up new database servers on virtual machines, in the cloud, or on any other managed or bespoke service, as per project requirements and with an adequate level of security. Create users and distribute credentials securely.
- Copy, move, migrate databases between different servers.
- Select the technologies best suited to our data engineering and data ops needs, and integrate them into our workflows over time.
- Plan and implement data models and structure databases optimally to fit project requirements.
- Test database performance, identify bottlenecks, and design and implement data indexing to optimise performance.
- Organise database backups and restore them in case of an emergency or database failure.
- Work with our data science, DevOps, and backend teams to make the most effective use of our data models.
- It will be beneficial to be able to develop ETL pipelines in a Python environment using petl and to write Python scripts for managing data.
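To give a feel for the ETL work above, here is a minimal, stdlib-only sketch of the extract-transform-load shape (petl wraps these same steps in table-oriented helpers such as fromcsv, convert, select, and tocsv; the data and threshold here are invented for illustration):

```python
import csv
import io

# Extract: read CSV rows as dicts (in-memory sample data).
raw = io.StringIO("name,revenue\nacme,100\nglobex,250\n")
rows = list(csv.DictReader(raw))

# Transform: cast revenue to int and keep rows above a threshold.
transformed = [
    {"name": r["name"], "revenue": int(r["revenue"])}
    for r in rows
    if int(r["revenue"]) > 150
]

# Load: write the cleaned rows back out as CSV.
out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=["name", "revenue"])
writer.writeheader()
writer.writerows(transformed)
```

In a real pipeline the extract and load ends would be database tables or files rather than in-memory buffers, but the shape stays the same.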
- Have more than 2 years of experience as a data engineer or data ops.
- Strong experience working with MongoDB and PostgreSQL; any experience with Neo4j or other graph databases is beneficial. A good understanding of how to configure PostgreSQL, MongoDB, Neo4j, and other databases for secure setup and performance tuning. Experience with performance tuning and indexing is important.
- Skills in securing databases, providing tiered levels of user access, and controlling access permissions.
- Experience with AWS, Google Cloud, MS Azure and similar platforms.
- Experience with Python-based scripting and developing ETL pipelines.
- Experience designing data models and data structures to be consumed by content APIs, optimising queries for performance, and identifying and mitigating bottlenecks in data flows.
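The indexing and performance-tuning skills above can be illustrated with a small sketch using SQLite's query planner as a stand-in for our production databases (the table, column, and index names are invented for this example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, payload TEXT)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [(i % 100, "x") for i in range(1000)],
)

# Without an index, the planner reports a full table scan.
plan_before = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM events WHERE user_id = 42"
).fetchone()[3]

# After adding an index, the same query becomes an index search.
conn.execute("CREATE INDEX idx_events_user ON events (user_id)")
plan_after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM events WHERE user_id = 42"
).fetchone()[3]
```

PostgreSQL (`EXPLAIN ANALYZE`) and MongoDB (`explain()`) expose the same kind of plan output, which is the usual starting point for the bottleneck analysis described above.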
- We have an existing team of data scientists, product managers, analysts, front-end developers, and Python backend developers, and we are looking to add a Data Engineer as we grow.
- Flexible working hours: part-time for the next few months, with the potential to move to full-time, long-term work.
- Freelance, fully remote.
To apply, send your cover letter and CV to email@example.com.