IBM Company Associate Developer Jobs | Hyderabad Jobs | Vizag Jobs
Hi friends! IBM is hiring for various software positions. This is a good opportunity for engineering degree candidates to gain entry into a multinational company. Find the details of these posts, the educational qualifications, salary, training period, benefits, selection process, and other information in this article, and apply online now.
📌NOTE: The company's official notification is at the bottom of this article; please check/scroll down.👇
Work Location:
Mysore, Ahmedabad, Coimbatore, Lucknow, Hyderabad, Kolkata, Mumbai, Pune, Chennai, Noida, Visakhapatnam, Navi Mumbai, Bangalore, Kochi, Bhubaneswar, Maharashtra, Telangana, Karnataka, Gujarat, Tamil Nadu, West Bengal, Uttar Pradesh, Kerala, Odisha, Andhra Pradesh, India
🎓Eligibility Criteria:
- Preferred technical and professional experience
- Engineering background and problem-solving ability.
- Good interpersonal skills.
- Should be flexible to work from anywhere in India.
- Year of passing: 2026.
- BE / BTech in Computer Science (all CS branches, e.g., CSE, AIML, DS, Cloud Computing, Big Data Analytics, CSBS, IoT, Robotics, AI, Cyber Security, Blockchain) or Information Technology, with a minimum of 6 CGPA / 60%.
- Fluent communication skills (written and spoken).
- No active backlogs.
👉Role1: Cloud Full Stack Engineer:
- Expertise in conceptualizing and designing new, efficient agents that perform common, repeatable integrations between applications and systems, leveraging GenAI and agentic AI capabilities to produce architectural artifacts
- Ability to generate API definitions for integrating different applications, and to use frameworks (e.g., Swagger Codegen) to generate code structure with GenAI, agentic AI, and assistants
- Expertise in various routing protocols and stateful streaming (e.g., Apache Flink, Kafka) for workflow orchestration and event-driven operations
- Deep knowledge across hyperscalers (e.g., AWS, Azure, Red Hat) and deployment models for optimized resource usage and cost
- Expertise in designing continuous integration, continuous delivery, and automation workflows (e.g., GitOps pipelines using Argo CD, Jenkins, Terraform)
👉Role2: Experienced Frontend Engineer:
- Leverages GenAI, AI Agents and Assistants in Web and Frontend engineering activities to deliver responsive web applications using languages (e.g., JavaScript, TypeScript) and frameworks (e.g., React, Angular, Vue.js)
- Build new and efficient agents to perform common and repeatable activities such as component generation, accessibility audits, and performance tuning
- Ability to engineer and deploy web and frontend applications and products using state-of-the-art design patterns, integrating UI/UX components, and handling client-side data management
- Expertise in multiple layers of web architectures, including languages (e.g., JavaScript, TypeScript), frameworks (e.g., React, Angular), libraries (e.g., Redux, Tailwind CSS), and frontend state management tools
👉Role3: AI Engineer/ AI Developer:
- Design and deploy agentic AI (multi-agent, planner-executor, RAG-enabled) aligned to client workflows and compliance.
- Prototype, fine-tune, and validate LLM/multimodal agents for end-to-end automation and decision support.
- Build orchestration, state management, connectors, and tool-integration layers for reliable agent actions.
- Develop next-generation agentic AI systems and intelligent virtual agents.
- Productionize agents with latency/cost optimization, autoscaling, observability, CI/CD on hybrid-cloud (OpenShift/Azure/AWS).
- Implement safety, alignment, explainability, audit trails, intent validation, and failure-mode handling.
- Lead client scoping, co-creation workshops, technical demos, and iterative customization to meet SLAs.
👉Role4: Associate Data Engineer:
- Bachelor’s degree in computer science / information technology / data science / engineering.
- Strong fundamentals in programming and problem solving, with hands-on knowledge of Python and SQL.
- Understanding of data structures, algorithms, debugging and analytical thinking to solve real-world data problems.
- Strong knowledge of RDBMS concepts, including tables, keys, indexing, normalization, joins, transactions, and query-optimization basics.
- Working knowledge of SQL for data extraction, transformation and analysis.
- Understanding of big data / distributed-processing fundamentals (e.g., Spark concepts).
- Exposure to Python libraries such as Pandas / Polars for data processing and analysis.
- Knowledge of Git / GitHub / GitLab for source control and collaborative development.