PHOENIX, AZ 85067
Posted on 01-27-2016
Job Seekers, Please send resumes to firstname.lastname@example.org or Call: (202) 719-0200 Ext: 127
Role: Technology Lead – US
Skillset: Grid, Hadoop, Hive Service
Wanted: Global Innovators To Help Us Build Tomorrow’s Enterprise
In the role of Technology Lead, you will interface with key stakeholders and apply your technical proficiency across different stages of the Software Development Life Cycle, including requirements elicitation, application architecture definition, and design. You will play an important role in creating the high-level design artifacts. You will also deliver high-quality code for a module, lead validation for all types of testing, and support activities related to implementation, transition, and warranty. You will be part of a learning culture, where teamwork and collaboration are encouraged, excellence is rewarded, and diversity is respected and valued.
Location for this position is Phoenix, AZ, USA. This position may require 100% relocation.
Applicants for employment in the U.S. must possess work authorization which does not require sponsorship by the employer for a visa (H-1B or otherwise).
• Bachelor’s degree or foreign equivalent required. One year of relevant work experience will also be considered in lieu of every year of education
• At least 5 years of design and development experience in Big Data, Java, or data warehousing technologies
• At least 3 years of hands-on design and development experience with Big Data technologies: Pig, Hive, MapReduce, HDFS, HBase, YARN, Spark, Oozie, Java, and shell scripting
Mandatory Technical Skills
• Skills: Pig, Hive, MapReduce, HDFS, HBase, YARN, Spark, Oozie, Java, and shell scripting
• Strong understanding of Hadoop fundamentals
• Experience in all the life cycle phases of the project
• Strong understanding of RDBMS concepts and experience with relational databases, preferably MySQL
• Should have worked on large data sets, with experience in performance tuning and troubleshooting
• Should be a strong communicator, able to work independently with minimal involvement from client SMEs
• Should be able to work in a team in a diverse, multi-stakeholder environment
• Experience in NoSQL Databases is preferred
• Experience in the financial domain is preferred
• Experience and desire to work in a Global delivery environment