Enterprise AI Hackathon: How to Get AI Agents Built in 6 Weeks

Most corporate hackathons end the same way: a polished slide deck, a winner’s trophy, and 37 ideas that never ship.

Chin Hin Group’s didn’t.

When we ran the Chin Hin AI Hackathon 2026 — formally, under the Digital Acceleration Program – Enterprise (DXP-E) — the brief was simple: build something that actually works. Not a prototype. Not a mockup. A working AI agent, live in staging, solving a real business problem inside one of Malaysia’s largest conglomerates.

150 final-year university students. 37 teams. 12 real business challenges pulled straight from Chin Hin Group’s operations. Six weeks to ship.

This is what enterprise AI implementation through university-industry collaboration actually looks like.

[Photo: winners holding prize signs on stage at the Chin Hin AI Hackathon 2026 finale, with a digital event banner in the background]

The Problem With Most Corporate Hackathons

Companies run hackathons for the optics. They invite students, set a vague theme, hand out prizes, and call it innovation.

The students get a good LinkedIn post. The company gets a case study photo. Nobody gets a working product.

That’s not a hackathon. That’s a pitch competition with extra steps.

Large enterprises are sitting on hundreds of unresolved digital problems — processes that drain time, decisions made on gut feel, workflows still running on Excel. They have the complexity. What they often lack is the execution capacity to turn AI strategy into AI delivery.

Final-year university students, given the right structure and real constraints, can close that gap. And the enterprises that figure this out are building a talent pipeline and accelerating their digital transformation at the same time.

What DXP-Enterprise Actually Is

The Digital Acceleration Program (DXP) is Kabel’s outcome-driven execution model. In its standard form, it deploys Digital Agents — students and fresh graduates — into companies to execute defined digital projects over 10 weeks.

DXP-Enterprise (DXP-E) is the same philosophy, scaled to the complexity of a large company.

Instead of one team working on one project, multiple teams compete to build the best solution to a real enterprise challenge. The company gets multiple working AI agent prototypes. The students get exposure to real enterprise systems, real data constraints, and real business owners who need the thing to work.

For SMEs: the standard 10-week DXP works well. The scope is bounded, the team is small, and the problems are solvable with the right talent and structure.

For large enterprises: DXP-E is the model. The enterprise AI challenges are bigger, the stakes are higher, and the talent requirement shifts, which is why Chin Hin required final-year students specifically.

What AI Agents Students Built

Chin Hin didn’t hand out generic briefs. They opened 12 real business problems across their operations — each owned by an actual department head who needed it solved.

A few examples:

Challenge 1 – CRM AI Recommender

Chin Hin sells thousands of products across multiple divisions. When a new project lead arrives, the sales team has to manually match specs against the full product catalog. It’s slow. Opportunities get missed. The brief: build an AI agent that reads a project spec, cross-references the entire product database, and recommends a “Chin Hin Bundle” — or routes the lead to the right Business Unit automatically.
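To make the bundle idea concrete, here is a minimal sketch of how a spec-matching agent could rank catalog items. The product names and the keyword-overlap scoring are invented for illustration; the real teams worked against Chin Hin's full product database, not a three-item dictionary.

```python
# Illustrative sketch of the spec-matching idea behind Challenge 1.
# Catalog entries and the scoring heuristic are made up, not Chin Hin's data.

def tokenize(text: str) -> set[str]:
    """Split text into lowercase words, dropping punctuation and short words."""
    return {w.strip(".,").lower() for w in text.split() if len(w) > 2}

def recommend_bundle(spec: str, catalog: dict[str, str], top_n: int = 2) -> list[str]:
    """Rank catalog items by keyword overlap with the project spec."""
    spec_tokens = tokenize(spec)
    scored = [(len(spec_tokens & tokenize(desc)), sku) for sku, desc in catalog.items()]
    return [sku for score, sku in sorted(scored, reverse=True) if score > 0][:top_n]

catalog = {
    "AAC-BLOCK": "lightweight aerated concrete block for wall construction",
    "DRY-MORTAR": "premixed dry mortar for plastering and bricklaying",
    "STEEL-MESH": "welded steel mesh reinforcement for concrete slabs",
}
spec = "Residential wall construction using concrete block and plastering mortar"
print(recommend_bundle(spec, catalog))  # ['AAC-BLOCK', 'DRY-MORTAR']
```

A production agent would swap the keyword overlap for embeddings or an LLM call, but the shape is the same: read the spec, score the catalog, return a bundle or route the lead.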

Challenge 2 – AI Key Account Manager

Chin Hin manages hundreds of dealers — from small hardware stores to major retail chains — but had been treating them all roughly the same. Relationship data was scattered across Excel sheets. The sales team only reached out when an order dropped. The brief: build an AI agent that dynamically segments every dealer into three tiers (Strategic, Profit, Growth) based on real-time spend, auto-prescribes the right nurturing activity per tier (“Client A is at risk — schedule a dinner”), and consolidates relationship health into a single loyalty dashboard.
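The tiering-plus-prescription logic can be sketched in a few lines. The spend thresholds, growth cutoff, and activity strings below are assumptions for illustration only; the actual rules belong to Chin Hin's sales team.

```python
# Illustrative sketch of the tiering rule from Challenge 2.
# Thresholds and prescribed activities are invented for the example.

def tier_dealer(annual_spend: float, growth_rate: float) -> str:
    """Assign a dealer to Strategic, Profit, or Growth based on spend and trend."""
    if annual_spend >= 1_000_000:
        return "Strategic"
    if growth_rate >= 0.15:  # fast-growing smaller accounts
        return "Growth"
    return "Profit"

def nurture_action(tier: str, days_since_last_order: int) -> str:
    """Prescribe the next relationship activity for a dealer."""
    if tier == "Strategic" and days_since_last_order > 30:
        return "At risk - schedule a dinner"
    if tier == "Growth":
        return "Offer volume incentive"
    return "Routine check-in call"

print(tier_dealer(1_200_000, 0.05))    # Strategic
print(nurture_action("Strategic", 45)) # At risk - schedule a dinner
```

The point of the brief is that these rules run continuously against live spend data and feed one dashboard, instead of living in someone's head and ten Excel sheets.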

[Photo: a large digital dashboard showing analytics, metrics, and system notifications, projected in a lecture hall]

Challenge 3 – Autonomous Credit Scoring Engine

Every new customer credit application meant hours of manual data entry — opening a CTOS report PDF, copying financial figures one by one into a scoring form. One typo could mean a wrong credit limit. The brief: build an AI agent that reads the PDF, extracts the right data points, and auto-populates the scorecard for instant decisions.
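The core of that brief is field extraction. Here is a sketch of the step that replaces the manual retyping; the field names and regex patterns are assumptions, and a real pipeline would first pull the text out of the CTOS report PDF with a PDF library before this step runs.

```python
import re

# Sketch of the field-extraction step in Challenge 3. Field names and regex
# patterns are illustrative; real CTOS report layouts would dictate both.

PATTERNS = {
    "annual_revenue": r"Annual Revenue\s*:\s*RM\s*([\d,]+)",
    "outstanding_debt": r"Outstanding Debt\s*:\s*RM\s*([\d,]+)",
}

def extract_scorecard_fields(report_text: str) -> dict[str, int]:
    """Pull numeric fields out of report text so no one retypes them by hand."""
    fields = {}
    for name, pattern in PATTERNS.items():
        match = re.search(pattern, report_text)
        if match:
            fields[name] = int(match.group(1).replace(",", ""))
    return fields

sample = "Annual Revenue : RM 4,500,000\nOutstanding Debt : RM 750,000"
print(extract_scorecard_fields(sample))
# {'annual_revenue': 4500000, 'outstanding_debt': 750000}
```

Deterministic extraction like this is exactly where the "one typo, wrong credit limit" risk disappears: the number in the scorecard is the number in the report, every time.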

Challenge 4 – Intelligent Procurement Planning Agent

A procurement officer managing ten Excel sheets, guessing order quantities for warehouse sales. Too much stock ties up cash. Too little loses sales. The brief: build an AI model that analyzes historical data, predicts demand by SKU, and auto-generates a Purchase Request routed to the right approver.
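A toy version of that forecasting step shows the shape of the logic. A real model would use proper time-series methods; this one uses a simple moving average, and the SKU history, safety stock, and stock-on-hand numbers are invented.

```python
# Toy sketch of the demand-forecasting step in Challenge 4.
# A moving average stands in for whatever model a team would actually train.

def forecast_demand(monthly_sales: list[int], window: int = 3) -> float:
    """Forecast next month's demand as the average of the last `window` months."""
    recent = monthly_sales[-window:]
    return sum(recent) / len(recent)

def purchase_request(monthly_sales: list[int], on_hand: int, safety_stock: int) -> int:
    """Order enough to cover forecast demand plus safety stock, minus stock on hand."""
    needed = forecast_demand(monthly_sales) + safety_stock - on_hand
    return max(0, round(needed))

history = [120, 135, 150, 160, 170]  # units sold per month for one SKU
print(purchase_request(history, on_hand=80, safety_stock=40))  # 120
```

Run per SKU, the output quantity becomes the auto-generated Purchase Request that gets routed to the right approver, replacing the guesswork across ten spreadsheets.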

Challenge 6 – AI Customer Success Guardian

When a customer called with a product issue, an agent picked up, listened, manually typed everything into the CRM, and then triaged. Skilled staff were spending hours on data entry and repeating the same FAQ answers. The brief: build an AI agent that listens to calls or reads chat transcripts, auto-creates a structured CRM ticket, guides customers through instant troubleshooting 24/7, and escalates cases where it detects frustration — before a problem becomes a complaint.
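The triage step can be sketched with a crude heuristic. The keyword list below stands in for whatever sentiment model a team would actually use, and the ticket fields are assumptions, not Chin Hin's CRM schema.

```python
# Sketch of the triage step in Challenge 6. A keyword heuristic stands in for
# a real sentiment model; ticket fields are invented for the example.

FRUSTRATION_CUES = {"unacceptable", "angry", "refund", "third time", "still broken"}

def build_ticket(transcript: str, customer_id: str) -> dict:
    """Turn a chat transcript into a structured ticket and flag escalation."""
    lowered = transcript.lower()
    frustrated = any(cue in lowered for cue in FRUSTRATION_CUES)
    return {
        "customer_id": customer_id,
        "summary": transcript[:80],
        "escalate": frustrated,
        "queue": "human-agent" if frustrated else "self-service",
    }

ticket = build_ticket("This is the third time the pump has failed. Unacceptable.", "C-1042")
print(ticket["escalate"], ticket["queue"])  # True human-agent
```

Routine questions stay in the 24/7 self-service loop; detected frustration jumps the queue to a human before the problem becomes a complaint.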

[Photo: Chin Hin Hackathon presentation]

Challenge 9 – AI Pricing Strategist

Pricing decisions made on intuition, not data. Inconsistent quotes, lost margins, no way to see the impact of a price change before it goes out. The brief: build an AI engine that recommends dealer-specific pricing based on purchase history, stock levels, and demand — with a margin simulator and a plain-English rationale for every recommendation.

These weren’t case studies. These were live operational problems with department owners who needed working agentic AI solutions.

The Six-Week Enterprise Hackathon Structure

What made this work wasn’t the talent alone. It was the structure.

Week 1 — Problem Discovery and Requirement Gathering. Teams worked directly with Chin Hin business owners to understand the real problem, not the surface problem. Deliverable: a Problem Statement and User Requirement Report.

Week 2 — Solution Ideation and Architecture Design. Before writing a line of code, teams had to produce a Solution Architecture Design Report. No skipping straight to building.

Weeks 3 & 4 — Core Build in Microsoft AI Foundry. The infrastructure of choice was Microsoft’s AI Foundry platform. Teams were building real AI agents for business — not just prompt chains or chatbot wrappers. Deliverable at each stage: an MVP in staging.

Week 5 — UI/UX and Integration. Working systems connected to real interfaces. Deliverable: a live MVP.

Week 6 — Testing, Documentation, and Final Pitch. Not a slide pitch. A product submission — tested, documented, ready for evaluation.

Every week had a deliverable. There was no week where nothing shipped.

Why Final-Year Students

This wasn’t open to any student. Chin Hin required final-year university students.

That’s intentional.

Final-year students have completed the bulk of their technical training. They’re past the foundations. They understand system design, can read documentation, and are starting to think about how problems connect — not just how code runs.

More importantly, they’re making decisions about where to start their careers.

A final-year student who spends six weeks embedded in an enterprise AI project — working with real data, real business owners, real constraints — doesn’t just produce output. They form a view of the company from the inside.

That’s a recruitment advantage most companies never build.

The talent pipeline and the deliverable are the same investment.

What an Enterprise Hackathon Looks Like at Scale

37 teams meant 37 different approaches to the same 12 problems.

Chin Hin’s business owners got to see multiple working solutions, compare architectures, and stress-test different approaches — all without a procurement cycle or a vendor contract.

7 finalist teams made it to the final pitch. Across those teams, Chin Hin walked away with tested MVPs, documented SOPs, and direct access to the students who built them.

The program was powered by Microsoft and Kabel, built on Microsoft AI Foundry — enterprise infrastructure, not hackathon-grade tooling.

Abel Saw, Group Chief Transformation Officer at Chin Hin: “This hackathon was purpose built to drive practical, business led outcomes. It reflects our shift from using AI as a tool to embedding it as a core capability.”

[Photo: a speaker on stage in front of a screen reading "AI to win. Speed decides. Boldness defines." with the CHIN HIN logo]

That’s the shift. Not AI bolted onto existing tools. Agentic AI embedded into operations — where the agent acts, decides, and routes without waiting for a human to trigger it.

The Enterprise AI Hackathon Model

If you’re running a large company with real digital debt and you want top talent, the DXP-Enterprise model gives you two things at once:

  • Real AI agents get built and tested, not just pitched. Problems that have sat in a backlog for months get multiple working AI agent prototypes in six weeks.
  • Top talent self-selects. The students who thrive in this environment — who work with business owners, navigate ambiguity, and ship under weekly deadlines — are exactly the ones worth hiring. You observe how they execute before you make an offer.

That’s a more reliable hiring signal than any interview or management trainee program.

For SMEs, the standard DXP 10-week program is the right fit. Scoped outcomes, matched Digital Agents, structured weekly execution. The complexity level is different. The model scales accordingly.

For enterprises running enterprise AI transformation at scale: DXP-E is the model. Bigger challenge, bigger return.

Run a Hackathon to Transform Your Digital Strategy

You don’t need a vague innovation initiative. You need a defined set of real problems, a structured program, and a university talent pool that’s ready to build.

The model exists. The infrastructure works. The students are in their final year right now.

Malaysia’s AI talent gap is real — 75% of employers report that the current pool lacks deep AI implementation skills. An enterprise hackathon program like DXP-E is one of the few models that develops that talent while delivering actual business output.

The question is whether your company has the problems ready, and whether you’re willing to let students solve them for real.

If you do, the output isn’t a presentation. It’s a working AI agent in staging by week six.

That’s the difference between a corporate hackathon and a DXP-Enterprise program.

Contact Kabel to run a DXP-Enterprise Hackathon for your organisation.
