- Implement the project in an Agile development environment; participate in daily stand-ups, backlog refinement, Sprint Planning 1 and 2, and sprint retrospectives.
- Participate in all SDLC phases of the application: analysis, design, development, UAT, and production.
- Conduct logical and physical database design.
- Assemble large, complex data sets that meet functional/non-functional business requirements.
- Design ETL processes and data pipelines to build large, complex datasets.
- Build, test and validate analytical and statistical models.
- Create centralized data stores to feed other applications.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and big data technologies (see the SQL sketch after this list).
- Implement, configure, administer, and monitor Hadoop clusters (monitoring commands sketched after this list).
- Provide technical assistance to junior team members.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
- Use the Maven build tool to build and deploy the application (build and publish commands sketched after this list).
- Design and develop microservices that read data from Kafka and store it in MapR-DB (see the Java sketch after this list).
- Create Docker images and spin up containers in which each application runs as a microservice (Dockerfile sketched after this list).
- Create JAR files and publish them to the central Artifactory repository for other teams to use.
- Work with Git/GitHub for source code version control; use Git branching and merging to manage the codebase (typical commands sketched after this list).
- Design and develop program specifications and unit test plans for quality management.
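A minimal sketch of the kind of SQL-based ETL load step referenced above. The `staging.orders_raw` and `curated.orders` tables, their columns, the `ds` partition key, and the Hive dialect are all hypothetical:

```sql
-- Load one day's partition from a raw staging table into a curated table.
-- Table and column names are illustrative only.
INSERT OVERWRITE TABLE curated.orders PARTITION (ds = '2019-06-01')
SELECT
    o.order_id,
    o.customer_id,
    CAST(o.amount AS DECIMAL(12, 2)) AS amount,
    TO_DATE(o.created_at)            AS order_date
FROM staging.orders_raw o
WHERE o.ds = '2019-06-01'
  AND o.order_id IS NOT NULL;        -- basic data-quality filter
```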
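For the Hadoop administration and monitoring work, these are standard HDFS/YARN command-line checks (nothing cluster-specific is assumed beyond shell access to an edge node):

```bash
# Capacity, live/dead DataNodes, and per-node disk usage
hdfs dfsadmin -report

# Health check of the filesystem namespace (missing/corrupt blocks)
hdfs fsck /

# State of every NodeManager known to the ResourceManager
yarn node -list -all

# Applications currently running on the cluster
yarn application -list
```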
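A minimal sketch of a microservice loop that reads JSON events from Kafka and stores them in a MapR-DB JSON table through the OJAI driver. The broker address, consumer group, topic name, and table path are hypothetical, and error handling and offset management are omitted:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.ojai.store.Connection;
import org.ojai.store.DocumentStore;
import org.ojai.store.DriverManager;

public class EventWriter {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");   // hypothetical broker
        props.put("group.id", "event-writer");            // hypothetical group
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
             Connection conn = DriverManager.getConnection("ojai:mapr:")) {
            DocumentStore store = conn.getStore("/apps/events"); // hypothetical table path
            consumer.subscribe(Collections.singletonList("events")); // hypothetical topic

            while (true) {
                ConsumerRecords<String, String> records =
                        consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // Each Kafka value is assumed to be a JSON document with an _id field
                    store.insertOrReplace(conn.newDocument(record.value()));
                }
                store.flush(); // push buffered writes to MapR-DB
            }
        }
    }
}
```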
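A sketch of packaging a service like the one above as a Docker image; the base image, artifact name, and tag are hypothetical:

```dockerfile
# Hypothetical base image and artifact name
FROM openjdk:8-jre-alpine
COPY target/event-writer-1.0.jar /app/event-writer.jar
ENTRYPOINT ["java", "-jar", "/app/event-writer.jar"]
```

```bash
# Build the image and run the microservice as a container
docker build -t event-writer:1.0 .
docker run -d --name event-writer event-writer:1.0
```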
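A sketch of the Maven build-and-publish flow described above. `mvn deploy` only publishes once `distributionManagement` points at the team's Artifactory instance; the repository id and URL below are placeholders:

```bash
# Compile, run unit tests, and package the application as a JAR
mvn clean package

# Publish the JAR to the central Artifactory repository for other teams
mvn deploy
```

```xml
<!-- Hypothetical Artifactory target in pom.xml; substitute the real repository URL -->
<distributionManagement>
  <repository>
    <id>central-releases</id>
    <url>https://artifactory.example.com/artifactory/libs-release-local</url>
  </repository>
</distributionManagement>
```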
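A typical branch-and-merge flow on Git/GitHub as described above; the branch names and commit message are illustrative:

```bash
git checkout -b feature/event-writer     # branch off for new work
git add src/
git commit -m "Add Kafka-to-MapR-DB event writer"
git push -u origin feature/event-writer  # then open a pull request on GitHub

# After review, merge back (often done through the GitHub PR UI instead)
git checkout develop
git merge --no-ff feature/event-writer
```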
Bachelor’s degree in Computer Science or a closely related field