About the position
Key Responsibilities:
- Collaboration and Cross-functional Support
- Data Engineering & Pipeline Management
- Data Quality, Governance, and Compliance
Desired Skills:
- Advanced SQL Server Development
- Strong proficiency in T-SQL
- ETL and Data Warehousing
- ETL/ELT pipeline design
- Data Modeling
Desired Work Experience:
- 5 to 10 years
Desired Qualification Level:
- Degree
Minimum Requirements:
- A tertiary qualification in Computer Science, Information Systems, Data Engineering, Analytics, Mathematics, or Statistics, or Matric with 6-8 years of experience in data engineering, database development, or data management in production environments.
- Proven hands-on experience with SQL Server, including advanced T-SQL development, ETL/ELT workflow design, and performance tuning.
- Demonstrated delivery of production data solutions, both batch and near real-time, within enterprise environments.
- Experience in building and maintaining data warehouses, feature stores, and reusable data products.
- Track record of implementing data governance and quality frameworks, ensuring compliance and traceability.
- Experience in orchestrating complex data pipelines using SQL Server Agent, SSIS, Airflow, or Azure Data Factory.
- Familiarity with cloud-based data architectures (Azure preferred) and version control systems (Git).
- Exposure to Power BI or equivalent visualization tools for reporting and analytics enablement.
- Strong understanding of data security, privacy, and regulatory compliance requirements.
Key Skills and Competencies:
- Advanced SQL Server Development: Strong proficiency in T-SQL, stored procedure design, query optimization, indexing, and error handling.
- ETL and Data Warehousing: Expertise in ETL/ELT pipeline design and orchestration for batch and near real-time processing using SQL Server Agent, SSIS, or Azure Data Factory.
- Data Modeling: Solid understanding of normalized and dimensional modeling (3NF, star, snowflake) and scalable architecture design.
- Feature Store Development: Ability to design and maintain reusable feature tables supporting machine learning and operational scoring.
- Data Validation and Quality Assurance: Skilled in implementing validation rules, reconciliation checks, and QA frameworks to ensure data integrity.
- Data Governance and Security: Strong knowledge of data governance, privacy, and compliance standards; experience maintaining data lineage documentation.
- Workflow Orchestration: Experience building restartable, traceable workflows with checkpoint and rollback mechanisms.
- Programming and Scripting: Proficiency in SQL; experience in Python or R for automation and data manipulation is beneficial.
- Cloud Platforms: Familiarity with Azure (preferred) or other cloud platforms such as AWS or GCP for data engineering workloads.
- Version Control and CI/CD: Exposure to Git and CI/CD pipelines for managing data workflow deployment.
- Visualization and Reporting (Beneficial): Ability to prepare scored or curated data for BI tools such as Power BI.
- Performance Optimization: Expertise in performance tuning, query profiling, and indexing strategies to optimize large-scale data operations.
- Collaboration and Communication: Ability to work effectively across technical and business teams, translating complex requirements into practical data solutions.