
Head of Data Engineering

Carlysle Human Capital

  • R Undisclosed
  • Permanent Senior position
  • Durban (Durban CBD)
  • Posted 04 Sep 2024 by Carlysle Human Capital
  • Expires in 19 days
  • Job 2578158 - Ref DBN000858

About the position

Our client, based in Durban, is looking for a strategic, architectural-level Engineering Specialist for this newly created role.

The main purpose of this role is to build their big data/bioinformatics solutions from scratch (including all hardware and software aspects). This will be a senior management role with decision-making responsibility. A key aspect of the role will be to define the strategy and then to design and roll out the entire infrastructure.

This role will suit someone from a scientific institution or university who has worked with scientific research solutions.

They have implemented a hybrid working model with 2 days in the office and 3 days remote. The role can also be based either in Durban or in Somkhele (Richards Bay area).

You will be required to design, plan, set up, configure, and support high-performance data pipelines for scientific compute resources, working closely with the science team. The science team has the scientific expertise but not the technical expertise, and this role is intended to bridge that gap. The role sits within the IT team and provides assistance to the science departments.

Likely candidates will have past experience at universities or other research and science institutions, or in larger corporate roles involving similarly data-heavy computational work.

They will be skilled in many niche technical areas, including databases, but this is not purely a DBA role. They must be skilled on the technical side of building data pipelines, which may involve cloud, hardware, and software skills and knowledge.

You will help build and maintain scalable data pipelines and related systems in a research-focused environment. You will be responsible for designing, developing, testing, and deploying data solutions that meet the business requirements and align with the scientific goals. You will collaborate with research scientists, internal IT, and other stakeholders to ensure data quality, reliability, accessibility, security, and governance, as follows:

  • Design, develop, and maintain end-to-end technical aspects of all data pipelines required to support the research scientists and data managers
  • Support ETL processes, including data ingestion, transformation, validation, and integration, using various tools and frameworks (see the sketch after this list)
  • Optimize data performance, scalability, and security
  • Provide technical guidance and support to data analysts and research scientists
  • Design data integrations and data quality frameworks
  • Work and collaborate with the rest of the IT department to help develop the strategy for long term scientific Big Data platform architecture
  • Document and effectively communicate data engineering processes and solutions
  • Help select and make use of the right cutting-edge technology, processes, and tools needed to drive technology within the science and research data management departments
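
To illustrate the kind of ETL work described above, here is a minimal sketch in Python using pandas. It is illustrative only, not the client's actual pipeline: the file paths, column names, and validation rules are all hypothetical.

```python
import pandas as pd

# Hypothetical ETL step: ingest raw sample data, transform it, validate it,
# and publish it for downstream analysis. All paths and column names below
# are illustrative, not taken from the job description.

RAW_PATH = "raw/samples.csv"          # hypothetical input
OUT_PATH = "curated/samples.parquet"  # hypothetical output

def extract(path: str) -> pd.DataFrame:
    """Ingest raw CSV data."""
    return pd.read_csv(path)

def transform(df: pd.DataFrame) -> pd.DataFrame:
    """Normalise column names and parse the collection timestamp."""
    df = df.rename(columns=str.lower)
    df["collected_at"] = pd.to_datetime(df["collected_at"], errors="coerce")
    return df

def validate(df: pd.DataFrame) -> pd.DataFrame:
    """Basic data-quality checks: required columns present, no null IDs."""
    required = {"sample_id", "collected_at"}
    missing = required - set(df.columns)
    if missing:
        raise ValueError(f"missing columns: {missing}")
    if df["sample_id"].isna().any():
        raise ValueError("null sample_id found")
    return df

def load(df: pd.DataFrame, path: str) -> None:
    """Write the curated dataset (Parquet output needs pyarrow installed)."""
    df.to_parquet(path, index=False)

if __name__ == "__main__":
    load(validate(transform(extract(RAW_PATH))), OUT_PATH)
```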

Minimum Qualifications:

  • Bachelor's degree or higher in Computer Science, IT, Engineering, Mathematics, or a related field
  • Industry-recognized IT certifications and technology qualifications, such as database- and data-related certifications
  • This is a technical role, so a strong focus will be on technical skills and experience

Minimum Experience:

  • 7+ years’ experience in Data Engineering, High-Performance Computing, Data Warehousing, or Big Data Processing
  • Strong experience with high-performance computing environments, including Unix, Docker, Kubernetes, Hadoop, Kafka, NiFi, or Spark, or with cloud-based big data processing environments such as Amazon Redshift, Google BigQuery, and Azure Synapse Analytics
  • At least 5 years’ advanced experience and very strong proficiency in UNIX/Linux and Windows
  • Knowledge of various data-related programming, scripting, or data engineering tools such as Python, R, Julia, T-SQL, and PowerShell

Knowledge and Abilities:

  • Strong experience working with various relational database technologies such as MS SQL, MySQL, and PostgreSQL, as well as NoSQL databases such as MongoDB and Cassandra
  • Experience with Big Data technologies such as Hadoop, Spark, and Hive
  • Experience with data pipeline tools such as Airflow, Spark, Kafka, or Dataflow (a brief Airflow sketch follows this list)
  • Experience working with containerization is advantageous
  • Experience with data quality and testing tools such as Great Expectations, dbt, or DataGrip is advantageous
  • Experience with cloud-based Big Data technologies (AWS, Azure, etc.) is advantageous
  • Experience with data warehouse and data lake technologies such as BigQuery, Redshift, or Snowflake advantageous
  • Strong experience designing end-to-end data pipelines
  • Strong knowledge of data modeling, architecture, and governance principles
  • Strong Linux Administration skills
  • Programming skills in various languages advantageous
  • Strong data security and compliance experience
  • Excellent communication, collaboration, and problem-solving skills
  • Ability to work independently and as part of a cross-functional team
  • Interest and enthusiasm for medical scientific research and its applications.
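
Since the list above mentions orchestration tools such as Airflow, here is a rough sketch of how a pipeline like the one shown earlier might be scheduled. It assumes Airflow 2.x; the DAG name and task callables are hypothetical stand-ins, not part of the role's actual stack.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

# Hypothetical callables standing in for real pipeline steps.
def extract():
    print("ingest raw data")

def transform():
    print("clean and normalise")

def load():
    print("publish curated dataset")

# Minimal Airflow 2.x DAG: three tasks run in sequence once a day.
with DAG(
    dag_id="research_data_pipeline",  # hypothetical name
    start_date=datetime(2024, 9, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```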

SUMMARY:

There is a large emphasis on the technical element of the role (experience working with and designing hardware clusters from the ground up). The IT department already has a number of technical skills and experience in-house, but the client is looking for someone very strong, with the relevant past experience, to help design these clusters and understand the technology in play for advanced scientific computational requirements.

This goes beyond the hardware element, though: you will need strong Linux, data pipeline, and coding/scripting skills. Although this is not a developer role, you should be comfortable with a certain level of coding/scripting and data analysis packages, and have strong virtualization skills (VMware, Hyper-V, OpenStack, KVM, etc.).

You will therefore be the technical link with the scientists: you will help build platforms (hardware, software, cloud, etc.) and then also manage the data and data pipelines on those systems. This includes the compliance, performance, and security of the data.

Desired Skills:

  • Data Engineering
  • ETL
  • Data Optimization
  • Data Warehousing
  • Big Data Processing
  • Big Data Cloud

Employer & Job Benefits:

  • Medical Aid
  • Pension

Carlysle Human Capital

About the agency

As a PROUDLY SOUTH AFRICAN Company, we specialize in providing high-level consulting and resourcing capacity for the Corporate Professional, Healthcare, IT and FMCG recruitment sectors. Carlysle Human Capital is a Human Resource Consultancy with countrywide sourcing ability, thanks to established relationships and technology-based search techniques. Staffed by a competent team of full-time consultants, we differentiate ourselves through our newly patented* quality recruitment processes. *Inter-C-View
