Computer Vision Engineering Track
A comprehensive learning path designed for developers ready to build real-world computer vision systems. We focus on practical skills through project-based work—no fluff, just code and problem-solving.
What Drives Our Teaching Approach
We've spent years working on computer vision projects—everything from object detection pipelines to real-time tracking systems. Our program reflects how we actually work.
Hands-On From Day One
You start writing code in the first session. Theory matters, but we introduce concepts when you need them to solve actual problems—not before.
Real Dataset Experience
Clean tutorial datasets don't prepare you for messy reality. We use real-world data with all its quirks—poor lighting, occlusions, edge cases that force creative solutions.
Portfolio-Worthy Projects
Every module ends with a project you'd actually show to potential clients or employers. Not toy examples: systems that process video, perform edge detection, or track multiple objects.
Learning Pathways That Actually Stick
We've watched students struggle with traditional approaches. So we redesigned the path based on what actually works—building skills progressively while keeping momentum.
Getting Started: Mika's Story
Mika came in knowing Python but felt lost in the OpenCV documentation. In her first three weeks she built a basic face detector: nothing fancy, just functional. The breakthrough came while debugging why her detector failed in low light. That's when the concepts clicked.
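For a sense of scale, a starter face detector along the lines of Mika's project fits in a handful of OpenCV calls. The sketch below is illustrative only, assuming the Haar cascade bundled with opencv-python; the equalization step and parameter values are placeholder choices, not the actual code from the program.

```python
import cv2

# Minimal sketch: Haar-cascade face detection with a histogram-equalization
# step that often helps on dim footage. Parameters are illustrative defaults.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def detect_faces(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)  # boost contrast in low-light frames
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

cap = cv2.VideoCapture(0)  # webcam; swap in a video file path for recorded footage
while True:
    ok, frame = cap.read()
    if not ok:
        break
    for (x, y, w, h) in detect_faces(frame):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```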
Deepening Skills: Henrik's Challenge
Henrik wanted to track products on a conveyor belt. His first attempt? Terrible accuracy. We spent hours analyzing frame rates, lighting conditions, motion blur. By month two, his system worked reliably—he learned more from those failures than any tutorial.
Real Application: Astrid's Project
Astrid tackled pedestrian detection for her capstone. Not a classroom exercise—actual footage from Taipei streets. She dealt with occlusions, varying lighting, camera angles. Three months of iteration resulted in a system she now showcases to potential employers.
Where You'll Be: Confident Building
By program end, you'll approach new computer vision challenges methodically. You'll know when to use pre-trained models versus custom solutions. You'll debug performance issues efficiently. Most importantly—you'll have working projects demonstrating your capability.
How We Actually Run Sessions
Our sessions blend instruction with active coding. You're not watching slides for two hours—you're implementing algorithms, testing them, seeing what breaks.
We meet twice weekly starting September 2025. Each session runs three hours: first hour covers concepts, next two hours you code while instructors circulate. When someone hits an interesting bug, we pause and troubleshoot as a group.
"The best learning happened when my object detector completely failed on a specific video. We spent 40 minutes as a class figuring out why—turned out the preprocessing pipeline had a subtle bug. I'll never forget that lesson."
Between sessions, you work on assignments using real datasets. Get stuck? Our chat stays active—instructors and fellow students help debug. This collaborative environment mirrors actual development teams.
Ludvig Naesgaard
Built tracking systems for industrial automation. Specializes in real-time processing and edge deployment challenges.
Siiri Kauppinen
Focuses on dataset preparation and model optimization. Previously worked on medical imaging analysis projects.
Brynhild Sørensen
Expert in semantic segmentation and scene understanding. Helps students transition from detection to more complex tasks.
What You'll Actually Learn
The curriculum covers core computer vision techniques through progressive projects. Each module builds on previous work.
- Image preprocessing and filtering techniques for various lighting conditions (see the short sketch after this list)
- Object detection implementation using both classical and deep learning approaches
- Real-time video processing and frame analysis optimization
- Multi-object tracking across video sequences with occlusion handling
- Model training with custom datasets and performance tuning
- Deployment considerations for edge devices and production systems
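To make the first bullet concrete, here is a minimal sketch of one common lighting-normalization step, CLAHE (contrast-limited adaptive histogram equalization), using OpenCV. The file names and parameter values are hypothetical placeholders, not material from the curriculum itself.

```python
import cv2

def normalize_lighting(bgr_image):
    """Locally equalize contrast so downstream detectors behave more
    consistently across bright and dark regions. Typical default parameters."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(gray)

if __name__ == "__main__":
    img = cv2.imread("frame.jpg")  # hypothetical input frame
    if img is None:
        raise SystemExit("frame.jpg not found")
    cv2.imwrite("frame_normalized.jpg", normalize_lighting(img))
```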
Program Details
The program runs 16 weeks starting September 2025. Classes meet Tuesday and Thursday evenings, 6:30-9:30 PM Taiwan time.
Prerequisites
Solid Python skills required. Familiarity with NumPy helps. No prior computer vision experience needed—we start from foundations.
Time Commitment
Expect 6 hours weekly in class plus 8-10 hours for assignments and project work. This isn't a casual course—it requires dedicated effort.
What You'll Build
Three major projects: a real-time object detector, a multi-camera tracking system, and a custom application of your choice. All three go in your portfolio.