About the position
Summary of Role
The company is undertaking a structured BI Delivery Program to strengthen its analytical capabilities and promote data-driven decision-making across its supply chain operations. As part of this initiative, a six-month engagement has been established to automate data ingestion, transformation, and reporting processes, thereby improving visibility, efficiency, and accuracy in reporting.
Responsibilities:
- Design and implement data pipelines using Azure Data Factory.
- Ensure data lineage, transformation quality, and reconciliation.
- Integrate multiple data sources into the enterprise data lake.
- Build new tabular models or extend existing ones in Azure Analysis Services or Power BI.
- Implement hierarchies, measures (DAX), and business logic to align with reporting needs.
- Conduct unit, integration, and UAT testing.
- Prepare test evidence, design documents, and deployment notes.
- Manage deployments through CAB-aligned (Change Advisory Board) processes.
- Ensure all pipeline designs, data models, transformation logic, and dependencies are clearly documented. Conduct a walkthrough session with relevant stakeholders (analytics, IT, or operations teams) to explain the solution structure and usage.
- Commit final code to the version control repository with proper tagging, ensure deployment scripts or configurations are updated, and verify that environments (dev, test, prod) are aligned and accessible to support or analytics teams.
- Perform data validation checks with the end users, resolve any anomalies, and hand over a monitoring checklist detailing how to track performance, logs, and alerts for ongoing maintenance and troubleshooting.
Deliverables:
- ETL pipeline development, data integration, data model optimization, and automation
- Technical design documents (approved by the Design Authority)
- Test cases, validation results, and internal skill development
- Weekly progress reports and sprint demos
Deployment and Handover:
- Use DevOps CI/CD pipelines.
- The vendor team will lead deployments in partnership with AMS teams.
- Standard operating procedures must be followed for cutovers.
- Each deployment must include a defined hypercare period.
- Project handover must ensure a smooth transition of all deliverables, documentation, and operational responsibilities from the development team to the Supply Chain and AMS teams.
- The handover will include a comprehensive walkthrough of the data pipelines, data models, and reporting solutions, alongside a review of system configurations, deployment scripts, and monitoring procedures.
Requirements:
- Matric and a Bachelor's degree in Computer Science, Information Systems, Engineering, Mathematics, or a related technical discipline.
- A Master's degree in a relevant field is an added advantage.
- 8 years of hands-on experience in data engineering, ETL/ELT development, or data architecture roles.
- Proven experience building and maintaining data pipelines in a cloud-based environment (Azure, AWS, or GCP).
- Experience working with large-scale data processing technologies (e.g., Spark, Databricks, Hadoop).
Desired Skills:
- Data Engineering
- ETL
- Data Modeling
- Big Data
- DevOps