IBM Recruitment Drive 2021

IBM is hiring for the post of Data Engineer.
Source: https://www.ibm.com/

IBM Off Campus Drive 2021

IBM is hiring for the position of Data Engineer (Software Development & Support). More such off-campus drives, internships, and government jobs are posted on our website: http://mechomotive.com/

Job Title: Data Engineer
Description: 5+ years of experience with DataStage; 3+ years with DB2 and SQL. Experience with data applications in a Cloud environment and Cloud knowledge. Knowledge of best practices across modules using data modelling. Experience with one or more of: datastores, Cloud, Kubernetes, config management tools. DevOps knowledge.
Work Location: Bangalore, Karnataka, India
Employment Type: Full Time

Introduction


Software Developers at IBM are the backbone of our strategic initiatives to design, code, test, and provide industry-leading solutions that make the world run today – planes and trains take off on time, bank transactions complete in the blink of an eye, and the world remains safe because of the work our software developers do.  Whether you are working on projects internally or for a client, software development is critical to the success of IBM and our clients worldwide.  At IBM, you will use the latest software development tools, techniques, and approaches and also work with leading minds in the industry to build solutions you can be proud of.



Your Role and Responsibilities


We are looking for a Data Engineer to join our growing team of analytics experts. The hire will be responsible for expanding and optimizing our data and data-pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data-pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The Data Engineer will support our software developers, database architects, data analysts, and data scientists on data initiatives, and will ensure that the data-delivery architecture remains consistent across ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products. The right candidate will be excited by the prospect of optimizing or even re-designing our company’s data architecture to support our next generation of products and data initiatives.


Responsibilities:

  • Solid experience and detailed knowledge of data warehouse technical architectures, infrastructure components, ETL, ELT, DataStage, and reporting/analytics tools
  • Experience in relational database concepts with a solid knowledge of SQL.
  • Strong knowledge of various data warehousing methodologies and data modelling concepts.
  • Hands-on modelling experience is highly desired.
  • Knowledge of professional software engineering practices and best practices for the full software development lifecycle, including coding standards, code reviews, source control management, build processes, testing, and operations
  • Self-motivated, excellent communication skills, team player
  • Experience and knowledge of unit testing and automated testing
  • Experience in agile development methodologies
  • Perform data analysis on large-scale datasets
  • Build dashboards to communicate insights across digital platforms
  • Data capture and transformation
  • Find elegant solutions to complex problems in delivering data and analytics
  • Hadoop development and implementation.
  • Loading from disparate data sets.
  • Pre-processing using Hive and Pig.
  • Designing, building, installing, configuring and supporting Hadoop.
  • Managing and deploying HBase.
  • Being a part of a POC effort to help build new Hadoop clusters.
  • Rich hands-on experience with Hadoop distributed frameworks, handling large amounts of big data using Spark and the Hadoop ecosystem
  • Experience with Spark, Scala, and Sqoop is required
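To give a flavour of the extract-transform-load work listed above, here is a minimal, hypothetical sketch in plain Python using the standard library's SQLite module as a stand-in warehouse (not IBM's actual stack; the table, column names, and sample records are invented for illustration):

```python
import sqlite3

# Hypothetical raw records, standing in for a disparate source system
# where amounts arrive as text.
raw_orders = [
    ("2021-03-01", "widget", "12.50"),
    ("2021-03-01", "gadget", "7.25"),
    ("2021-03-02", "widget", "3.00"),
]

def run_etl(records):
    """Minimal extract-transform-load: parse, load into SQLite, aggregate."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (day TEXT, product TEXT, amount REAL)")
    # Transform: cast the amount column from text to float before loading.
    cleaned = [(day, product, float(amount)) for day, product, amount in records]
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", cleaned)
    # A simple warehouse-style aggregation over the loaded data.
    rows = conn.execute(
        "SELECT product, SUM(amount) FROM orders GROUP BY product ORDER BY product"
    ).fetchall()
    conn.close()
    return rows

print(run_etl(raw_orders))  # [('gadget', 7.25), ('widget', 15.5)]
```

In a real Hadoop/Spark deployment the same extract-transform-load shape would run over distributed datasets rather than an in-memory database, but the pipeline structure is the same.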

Required Technical and Professional Expertise

  • Overall 5+ years of experience
  • 4+ years of experience in data engineering work around data integration, ETL, and DataStage tools, with knowledge of BI tools
  • At least 3 years of experience in the Hadoop ecosystem, i.e. Hadoop, HBase, Hive, Scala, Spark, Kafka, and Python
  • Experience with relational SQL, DB2 and NoSQL databases.
  • Experience with object-oriented/functional scripting languages (e.g. Python, Java, C++, Scala) and stream-processing systems (e.g. Storm, Spark Streaming)
  • Hands-on experience in data modelling, data warehousing solutions, and data APIs
  • Knowledge of algorithms and data structures, Docker, Kubernetes
  • At least 5 years of experience in software development life cycle
  • Troubleshooting critical issues
  • At least 2 years of experience in design and architecture review; proficiency in data analysis and strong SQL skills
  • Experience in data warehousing, ETL tools, and MPP database systems, plus experience working in Hive and Impala, including creating custom UDFs and custom input/output formats/SerDes
  • Ability to acquire, compute, store, and process various types of datasets on the Hadoop platform
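The requirements above mention writing custom UDFs (user-defined functions) for Hive and Impala. As a rough illustration of the concept only, here is a toy UDF registered against SQLite via Python's standard library (the function name and data are invented; Hive UDFs are typically written in Java against a different API):

```python
import sqlite3

def normalise_product(name):
    """Toy UDF body: trim whitespace and lower-case a product code."""
    return name.strip().lower()

conn = sqlite3.connect(":memory:")
# Register the Python function as a one-argument SQL function named NORM,
# so it can be called from SQL just like a built-in.
conn.create_function("NORM", 1, normalise_product)
conn.execute("CREATE TABLE products (code TEXT)")
conn.executemany("INSERT INTO products VALUES (?)", [("  WIDGET ",), ("Gadget",)])
result = [row[0] for row in conn.execute("SELECT NORM(code) FROM products ORDER BY rowid")]
print(result)  # ['widget', 'gadget']
```

The idea carries over to Hive: the engine lets you plug custom per-row logic into SQL queries, which is what the "custom UDFs" requirement refers to.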

Preferred Technical and Professional Expertise

  • 5+ years of experience with DataStage and 3+ years with DB2 and SQL
  • Experience with data applications in a Cloud environment and Cloud knowledge
  • Knowledge of best practices across modules using data modelling
  • Experience with one or more of: datastores, Cloud, Kubernetes, config management tools
  • DevOps knowledge

Apply Now: Click Here (Job Link)


[ Important ]

  • All Company names, logos, and brands are the Intellectual Property of their respective owners. All company, product, and service names used on this website are for identification purposes only.
  • We are not associated with any company, agency, or agent whose jobs are posted on mechomotive.com; we are only an information provider for job openings. Read our Disclaimer Policy and Terms of Service for more information.

For more job offers, CLICK HERE