How We Built an IoT GPS Tracking System for 30,000+ Vehicles
Pixytan GPS platform tracking 30,000+ vehicles in real time with geofencing alerts, driver behavior scoring, fuel monitoring, and predictive maintenance.
The Challenge
Fleet operators managing 30,000+ vehicles had no real-time visibility into their assets. Drivers called in their locations manually. Dispatch happened through radio and phone calls. When a vehicle broke down on a highway, the operations center didn't know about it until the driver called — sometimes hours later. Fuel theft was rampant. Route deviations went unnoticed. There was zero data on driver behavior, so accident rates stayed high.
Existing GPS solutions in the market fell short in three areas. First, they relied on third-party hardware with locked firmware — no way to add custom sensors or OBD-II integration for engine diagnostics. Second, their dashboards were built for small fleets (50-100 vehicles) and collapsed under 30,000+ concurrent connections. Third, alerts arrived via SMS with 30-60 second delays, which is an eternity when a vehicle crosses a geofence boundary and you need to respond immediately.
The requirements went beyond basic tracking. Fleet managers needed real-time location updates every 10 seconds, geofencing with instant alerts (under 2 seconds), driver behavior scoring based on acceleration/braking/cornering patterns, fuel monitoring to detect theft and inefficiency, predictive maintenance alerts based on engine data, and a mobile app for fleet managers who spent most of their day away from desks. All of this had to work reliably at scale — 30,000 devices sending data simultaneously.
The Solution
Geminate Solutions built Pixytan GPS as a full-stack IoT platform: custom hardware, mobile apps, backend infrastructure, and web dashboard. The GPS tracking device was designed in-house with OBD-II port integration, accelerometer for driver behavior detection, and a SIM-based cellular connection for data transmission. Each device reported its position every 10 seconds via MQTT — a lightweight messaging protocol purpose-built for IoT devices with limited bandwidth.
The backend ran on Node.js with MQTT broker (Mosquitto) handling device connections. AWS IoT Core managed device registration, authentication, and message routing at scale. Location data flowed into TimescaleDB — a PostgreSQL extension optimized for time-series data that compressed months of GPS coordinates into a fraction of the storage a regular database would need. A Redis layer cached the latest position for every vehicle, so when a fleet manager opened the dashboard, 30,000 vehicle positions loaded in under a second.
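The "latest position" cache pattern described above can be sketched in a few lines. Production used Redis; a plain in-memory Map stands in here for illustration, and the class and method names are hypothetical, not the platform's actual API:

```javascript
// Sketch of the latest-position cache: one entry per vehicle, overwritten
// on every incoming MQTT position message. Redis in production; a Map here.
class PositionCache {
  constructor() {
    this.latest = new Map(); // vehicleId -> last known position
  }

  // Called for every incoming position report.
  updatePosition(vehicleId, { lat, lon, speed, heading, ts }) {
    this.latest.set(vehicleId, { lat, lon, speed, heading, ts });
  }

  // Dashboard load: one cache read per vehicle instead of a DB scan.
  getFleetSnapshot() {
    return Array.from(this.latest.entries()).map(([id, pos]) => ({ id, ...pos }));
  }
}

const cache = new PositionCache();
cache.updatePosition('VH-001', { lat: 23.81, lon: 90.41, speed: 42, heading: 90, ts: 1700000000 });
cache.updatePosition('VH-001', { lat: 23.82, lon: 90.42, speed: 45, heading: 92, ts: 1700000010 });
console.log(cache.getFleetSnapshot().length); // 1 — only the newest position per vehicle is kept
```

Because every update overwrites the previous one, the cache stays at one record per vehicle regardless of message volume, which is what makes the sub-second fleet snapshot possible.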
Two client-facing interfaces served different use cases. The React web dashboard gave operations centers a bird's-eye view: vehicle clusters on a map, fleet-wide analytics, geofence configuration, and maintenance scheduling. The Flutter mobile app gave fleet managers on the move real-time tracking, push notification alerts, trip history, and driver scorecards. Both interfaces received real-time updates through WebSocket connections, with automatic reconnection when network connectivity dropped.
Flutter, Node.js, MQTT (Mosquitto), PostgreSQL, TimescaleDB, Redis, AWS IoT Core, React dashboard, custom GPS hardware with OBD-II, WebSocket, Firebase Cloud Messaging
Architecture Decisions
MQTT was chosen over HTTP for device communication because the difference at scale is staggering. HTTP requests from 30,000 devices every 10 seconds generate 180,000 TCP connections per minute. MQTT maintains persistent connections — each device opens one connection and keeps it alive with tiny keepalive packets (2 bytes). The protocol overhead dropped from ~800 bytes per HTTP request to ~4 bytes per MQTT publish. At 30,000 devices, that's the difference between 144MB and 720KB of overhead per minute.
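The arithmetic behind that comparison, using the per-message overhead figures above (both figures are approximations, as the text notes):

```javascript
// Per-minute protocol overhead at fleet scale.
const devices = 30000;
const reportsPerMinute = 60 / 10;                     // one report every 10 seconds
const messagesPerMinute = devices * reportsPerMinute; // 180,000

const httpOverheadBytes = 800; // headers + connection setup per request (approx.)
const mqttOverheadBytes = 4;   // fixed header on a persistent connection (approx.)

const httpMB = (messagesPerMinute * httpOverheadBytes) / 1e6; // 144 MB/min
const mqttKB = (messagesPerMinute * mqttOverheadBytes) / 1e3; // 720 KB/min
console.log(messagesPerMinute, httpMB, mqttKB); // 180000 144 720
```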
TimescaleDB was a decisive choice over regular PostgreSQL or MongoDB for location storage. GPS data is inherently time-series: latitude, longitude, speed, heading, timestamp. TimescaleDB's hypertable partitioning automatically split data into time-based chunks, and its native compression reduced storage by 90% for data older than 7 days. A query like "show me all positions for vehicle X in the last 24 hours" ran in 15ms, even with billions of rows in the table. With regular PostgreSQL, the same query took 3-4 seconds.
The geofencing engine used a server-side approach rather than on-device processing. Each incoming GPS coordinate was checked against all active geofences for that vehicle's fleet. With potentially thousands of geofences per fleet, a naive point-in-polygon check would've been too slow. The team implemented a spatial index using R-tree structures, reducing geofence checks from O(n) to O(log n). A single coordinate check against 5,000 geofences completed in under 1 millisecond.
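A simplified sketch of that two-stage check: a cheap bounding-box prefilter (standing in for the R-tree — production would use a real spatial index) followed by an exact ray-casting point-in-polygon test on the survivors. The fence shape and field names are illustrative:

```javascript
// Stage 1: cheap axis-aligned bounding-box rejection.
function inBBox(p, fence) {
  return p.lat >= fence.minLat && p.lat <= fence.maxLat &&
         p.lon >= fence.minLon && p.lon <= fence.maxLon;
}

// Stage 2: exact ray-casting point-in-polygon test, run only on fences
// that pass the prefilter. Polygon vertices are [lon, lat] pairs.
function inPolygon(p, poly) {
  let inside = false;
  for (let i = 0, j = poly.length - 1; i < poly.length; j = i++) {
    const [xi, yi] = poly[i], [xj, yj] = poly[j];
    if ((yi > p.lat) !== (yj > p.lat) &&
        p.lon < ((xj - xi) * (p.lat - yi)) / (yj - yi) + xi) {
      inside = !inside;
    }
  }
  return inside;
}

function matchingFences(point, fences) {
  return fences
    .filter(f => inBBox(point, f))            // cheap rejection first
    .filter(f => inPolygon(point, f.polygon)) // exact check on survivors
    .map(f => f.id);
}

const fences = [{
  id: 'depot', minLat: 0, maxLat: 10, minLon: 0, maxLon: 10,
  polygon: [[0, 0], [0, 10], [10, 10], [10, 0]], // [lon, lat] vertices
}];
console.log(matchingFences({ lat: 5, lon: 5 }, fences));  // ['depot']
console.log(matchingFences({ lat: 15, lon: 5 }, fences)); // []
```

The R-tree generalizes the prefilter: instead of scanning every fence's bounding box, it walks a tree of nested boxes, which is what takes the check from O(n) to O(log n).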
Custom hardware added 3 months to the timeline, but it was the right call. Off-the-shelf GPS trackers couldn't read OBD-II engine data (RPM, coolant temperature, fuel level, error codes). The custom device combined GPS, GSM, accelerometer, and OBD-II reader in a single unit. This gave the platform data that competitors using generic trackers simply didn't have. Driver behavior scoring, fuel theft detection, and predictive maintenance all depended on sensor data that only custom hardware could provide.
Key Features Built
Real-Time Vehicle Tracking
Every vehicle reported its GPS position every 10 seconds. The React dashboard displayed all vehicles on a map with color-coded status indicators: green (moving), yellow (idle), red (stopped), gray (offline). Fleet managers could click any vehicle to see live speed, heading, nearest address, and current driver. The map clustered vehicles at low zoom levels and expanded to individual markers as users zoomed in — handling 30,000 markers without browser lag. Historical playback let managers replay any vehicle's route for the past 90 days, with speed-adjustable animation.
Geofencing and Instant Alerts
Fleet managers drew geofences on the map — circles, rectangles, or custom polygons. When a vehicle entered or exited a geofence, the system triggered an alert within 2 seconds. Alerts went through three channels simultaneously: push notification to the Flutter app, email to the fleet manager, and a dashboard popup. Common use cases included depot entry/exit tracking, restricted zone monitoring, route corridor enforcement, and customer site arrival notifications. The system processed geofence checks for 30,000 vehicles against thousands of active fences without any noticeable delay.
Driver Behavior Scoring
The accelerometer in the custom GPS device measured three axes of motion continuously. The backend analyzed this data to detect harsh braking (deceleration over 0.4G), rapid acceleration, sharp cornering, and speeding. Each trip received a score from 0 to 100 based on the frequency and severity of these events. Drivers saw their own scores in a simplified view. Fleet managers got a ranking of all drivers, trend charts showing improvement or decline over time, and automatic flagging of drivers scoring below 60. Fleets that implemented driver coaching based on these scores saw accident rates drop 40% within 6 months.
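The scoring logic can be sketched as a penalty model over accelerometer samples. The 0.4G harsh-braking threshold comes from the system described above; the other thresholds and the penalty weights here are illustrative assumptions, not the production values:

```javascript
// Sketch of trip scoring: start at 100, subtract a penalty per event.
// Only the 0.4G braking threshold is from the source; the rest is assumed.
function scoreTrip(samples) {
  const PENALTIES = { harshBrake: 5, rapidAccel: 3, sharpTurn: 3, speeding: 4 };
  let score = 100;
  for (const s of samples) {
    if (s.longitudinalG <= -0.4) score -= PENALTIES.harshBrake;    // harsh braking
    if (s.longitudinalG >= 0.35) score -= PENALTIES.rapidAccel;    // rapid acceleration
    if (Math.abs(s.lateralG) >= 0.4) score -= PENALTIES.sharpTurn; // sharp cornering
    if (s.speedKmh > s.limitKmh) score -= PENALTIES.speeding;      // over the limit
  }
  return Math.max(0, score);
}

const trip = [
  { longitudinalG: -0.5, lateralG: 0.1, speedKmh: 60, limitKmh: 80 },  // harsh brake
  { longitudinalG: 0.1, lateralG: 0.05, speedKmh: 95, limitKmh: 80 },  // speeding
];
console.log(scoreTrip(trip)); // 91
```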
Fuel Monitoring and Theft Detection
The OBD-II connection provided fuel level readings every 30 seconds. The system established normal fuel consumption patterns per vehicle based on distance traveled, engine load, and driving style. When fuel level dropped faster than expected (indicating a possible siphoning event) or a refueling event didn't match any authorized fuel stop, the system flagged it immediately. Fleet managers received fuel theft alerts with timestamps, locations, and estimated volumes. Customers reported 25% fuel cost reduction within the first quarter — a combination of theft prevention and driving efficiency improvements from behavior scoring.
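The core of the siphoning check is a comparison between the actual fuel drop and the drop expected for the distance driven. A minimal sketch, where the per-vehicle consumption baseline and tolerance are illustrative assumptions:

```javascript
// Sketch of the fuel-drop anomaly check. The litersPerKm baseline and the
// tolerance are assumed values; production learned these per vehicle.
function detectFuelAnomaly(reading, baseline) {
  // Expected drop over the interval, from the vehicle's consumption rate.
  const expectedDropL = reading.distanceKm * baseline.litersPerKm;
  const actualDropL = reading.fuelStartL - reading.fuelEndL;
  // Flag if the drop exceeds expectation by more than the tolerance.
  if (actualDropL > expectedDropL + baseline.toleranceL) {
    return { flagged: true, estimatedLossL: actualDropL - expectedDropL };
  }
  return { flagged: false, estimatedLossL: 0 };
}

const baseline = { litersPerKm: 0.3, toleranceL: 5 };
// Vehicle parked (0 km driven) but the tank dropped 40 L: likely siphoning.
console.log(detectFuelAnomaly({ distanceKm: 0, fuelStartL: 120, fuelEndL: 80 }, baseline));
// -> { flagged: true, estimatedLossL: 40 }
```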
Predictive Maintenance Alerts
OBD-II data included engine RPM, coolant temperature, battery voltage, and diagnostic trouble codes (DTCs). The system tracked these readings over time and flagged anomalies: coolant temperature trending upward over a week, battery voltage dropping below threshold, or recurring DTCs that indicated developing mechanical issues. Maintenance alerts triggered before breakdowns happened — not after. Fleet managers got a maintenance calendar showing recommended service windows per vehicle. The predictive approach reduced roadside breakdowns by 35% compared to the client's previous fixed-interval maintenance schedule.
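Trend detection of the kind described (coolant temperature creeping upward over a week) can be sketched as a least-squares slope over a window of daily readings. The alert threshold here is an illustrative assumption, not the production value:

```javascript
// Least-squares slope over evenly spaced readings (units per reading).
function slope(values) {
  const n = values.length;
  const xMean = (n - 1) / 2;
  const yMean = values.reduce((a, b) => a + b, 0) / n;
  let num = 0, den = 0;
  for (let i = 0; i < n; i++) {
    num += (i - xMean) * (values[i] - yMean);
    den += (i - xMean) ** 2;
  }
  return num / den;
}

// Daily average coolant temperatures (°C) trending upward over a week.
const coolant = [88, 89, 89, 90, 91, 92, 93];
const degPerDay = slope(coolant);
// Assumed alert threshold: sustained rise of more than 0.5 °C per day.
if (degPerDay > 0.5) console.log('maintenance alert: coolant temperature trending up');
```

The same slope test applies to battery voltage (alert on a negative trend) and any other slowly drifting OBD-II reading.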
Fleet Analytics Dashboard
The React dashboard aggregated fleet-wide metrics: total distance traveled per day, fuel consumption trends, average driver scores, idle time percentages, and maintenance costs. Custom reports could be generated for any date range and exported as PDF or CSV. Comparison views let managers benchmark one vehicle against fleet averages or one driver against peers. The dashboard loaded in under 2 seconds even for fleets with 5,000+ vehicles, thanks to pre-aggregated metrics in TimescaleDB continuous aggregates and Redis caching for frequently accessed views.
Mobile Fleet Manager App (Flutter)
Fleet managers needed tracking capabilities outside the office. The Flutter app provided real-time vehicle locations on a map, push notification alerts for geofence events and driver violations, trip history with route playback, and driver scorecards. A quick-action button let managers send messages to specific drivers through the app. Offline mode cached the last known positions of all vehicles, so managers could view fleet status even without connectivity. The app maintained 4.5+ ratings on both iOS and Android app stores.
The Results
| Metric | Result | Context |
|---|---|---|
| Vehicles Tracked | 30,000+ | Reporting positions every 10 seconds |
| Fuel Cost Reduction | 25% | Theft prevention + driving efficiency combined |
| Accident Reduction | 40% fewer | Through driver behavior scoring and coaching |
| System Uptime | 99.9% | Across 8 months of production operation |
| Alert Delivery | Under 2 seconds | From geofence breach to push notification |
| Roadside Breakdowns | 35% reduction | Predictive maintenance vs. fixed-interval schedule |
How This Compares to Alternatives
Custom IoT platform vs AWS IoT Core — which is cheaper at scale? It depends on device count. Cloud platforms charge per message and per device. Custom platforms have higher upfront cost but dramatically lower marginal cost as you grow.
| Approach | Cost | Timeline | Customization | Best For |
|---|---|---|---|---|
| Custom IoT Platform | $80K–$180K upfront | 4–7 months | Full control | 5,000+ devices with custom protocols or hardware |
| AWS IoT Core | $5–$10/device/mo at scale | 2–6 weeks | Moderate (AWS ecosystem lock-in) | Prototypes and fleets under 1,000 devices |
| Azure IoT Hub | $5–$12/device/mo at scale | 2–6 weeks | Moderate (Azure ecosystem) | Enterprises already on Microsoft stack |
| Particle.io | $0.35–$0.69/device/mo + hardware | 1–2 weeks | Low (their hardware + cloud) | Quick prototyping, small-scale deployments |
When should you build custom IoT vs use a cloud platform? At 500 devices sending data every 5 seconds, AWS IoT Core costs roughly $30K–$60K/year in message fees alone. A custom MQTT broker on dedicated infrastructure handles the same load for $3K–$5K/year. The crossover point is usually around 1,000 devices — below that, cloud wins on speed. Above it, custom wins on cost.
The time-series data pipeline we built here has applications far beyond GPS tracking. The same pattern powers manufacturing sensor data systems, energy smart meter platforms, and agricultural monitoring networks globally. If you're evaluating whether to outsource an IoT build, look for a team that's shipped hardware-software integrated products before. Pure software teams underestimate firmware challenges. That gap between simulation and real-world deployment is where most IoT projects fail.
Lessons Learned
IoT projects have a hardware-software feedback loop that pure software projects don't. The first batch of GPS devices had an antenna placement issue that reduced signal quality inside metal truck cabins. The software team couldn't compensate — no amount of clever algorithm work fixes a hardware problem. Fixing it required a hardware revision, which added 3 weeks. The lesson: prototype hardware in real-world conditions early. Lab testing doesn't replicate a metal truck cabin at 80 km/h.
MQTT quality-of-service levels matter at scale. QoS 0 (fire and forget) lost about 0.1% of messages under normal conditions — acceptable for position updates since the next one arrives in 10 seconds. But for geofence alerts and maintenance warnings, losing even one message was unacceptable. The team used QoS 1 (at least once delivery) for alerts and QoS 0 for regular position updates. This split approach kept bandwidth usage low while guaranteeing that critical alerts always arrived.
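The QoS split reduces to a routing rule at publish time: critical channels get at-least-once delivery, the high-volume position stream gets fire-and-forget. A minimal sketch — the topic names are illustrative, not the platform's actual topic scheme:

```javascript
// Choose MQTT QoS by topic: QoS 1 (at least once) for messages that must
// not be lost, QoS 0 (fire and forget) for the routine position stream.
function qosForTopic(topic) {
  // Critical channels: geofence alerts and maintenance warnings.
  if (topic.startsWith('alerts/') || topic.startsWith('maintenance/')) return 1;
  // Position updates tolerate occasional loss: the next fix arrives in 10 s.
  return 0;
}

console.log(qosForTopic('positions/VH-001'));       // 0
console.log(qosForTopic('alerts/geofence/VH-001')); // 1
```

With a client such as the `mqtt` npm package, this value would be passed as the `qos` publish option; the trade-off is that QoS 1 costs an acknowledgment round-trip per message, which is why it's reserved for the low-volume alert topics.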
Data volume from 30,000 devices generating records every 10 seconds is enormous — roughly 260 million location records per day. Standard database maintenance practices (vacuum, index rebuilds) that work for typical applications failed at this scale. TimescaleDB's automatic chunk management and compression policies were non-negotiable. Without them, storage costs would've been 10x higher and query performance would've degraded within weeks.
Driver behavior scoring only works if drivers trust it. The first version flagged every hard braking event, including legitimate emergency stops. Drivers complained the scores were unfair. The team at Geminate added contextual filtering — braking events near traffic signals, stop signs, and known hazard zones were excluded from scoring. Harsh braking on an open highway counted. Braking at a red light didn't. After this adjustment, driver adoption jumped from 40% to 85%, and the fleet managers started seeing real improvements in driving patterns.
Frequently Asked Questions
How long does it take to build a GPS fleet tracking platform?
The Pixytan GPS platform took 8 months for the full system including custom hardware, Flutter mobile app, Node.js backend, and React dashboard. An MVP with basic tracking and geofencing can launch in 12-16 weeks. The hardware development (custom GPS device with OBD-II integration) added 3 months that a software-only platform wouldn't need.
How much does a fleet management system cost to build?
A comprehensive fleet tracking platform like Pixytan costs $100,000-$160,000 for the full stack including hardware design, mobile apps, backend, and dashboard. Software-only solutions (using off-the-shelf GPS hardware) cost $60,000-$90,000. Monthly infrastructure for 30,000+ vehicles runs $1,500-$2,500 depending on data frequency and retention policies.
What technology powers real-time tracking of 30,000+ vehicles?
MQTT protocol handles device-to-server communication with minimal bandwidth overhead. TimescaleDB (PostgreSQL extension for time-series data) stores location history efficiently. AWS IoT Core manages device connections at scale. The Flutter mobile app displays real-time positions on maps using WebSocket connections to the Node.js backend. Redis caches the latest position for each vehicle for sub-second dashboard loads.
How does driver behavior scoring work in this system?
The GPS hardware measures acceleration, braking, cornering G-forces, and speed. The backend scores each trip on a 0-100 scale based on harsh braking events (deceleration over 0.4G), rapid acceleration, sharp turns, and speeding instances. Fleet managers see driver rankings, and the system flags drivers scoring below 60 for coaching. Fleets using driver scoring reduced accident rates by 40% within 6 months.
Can Geminate build a similar fleet tracking system for our business?
Yes. The MQTT architecture, real-time tracking pipeline, and analytics engine from Pixytan are directly reusable. Geminate Solutions has delivered 50+ products for clients worldwide. A fleet management platform typically costs $60,000-$160,000 depending on hardware requirements and launches in 12-16 weeks for software-only solutions. Visit geminatesolutions.com/get-started for a free assessment.
Is it worth building custom IoT tracking instead of using AWS IoT Core alone?
AWS IoT Core handles connections, but you still need custom firmware, data pipelines, and dashboards. Healthcare patient device monitoring, eCommerce warehouse tracking, and food delivery cold chain systems all require business logic AWS doesn't provide. The managed service covers maybe 20% of what a production IoT platform needs.
What are the hidden costs of IoT platform development?
SIM card data plans ($3-$8/device/month), hardware lifecycle replacements every 3-4 years, and SaaS-level recurring hosting for time-series databases. Logistics companies underestimate device maintenance costs. Budget 15-20% of the initial build annually for firmware updates, hardware swaps, and infrastructure scaling.
When does it make sense to invest in custom IoT vs cloud platforms?
When you need data ownership or industry-specific logic. Startups building IoT MVPs can start with cloud, but enterprise clients with compliance requirements need custom. EdTech campus IoT, marketplace multi-vendor device management, and fleet tracking at 100+ devices all hit the point where cloud-only costs more than owning the stack.
How do you choose a company to build IoT software?
Demand proof of hardware-software integration experience. Ask about MQTT at scale, firmware OTA updates, and real-time data pipelines. A team with fleet tracking, healthcare device monitoring, and manufacturing automation projects has the cross-industry depth you need. Avoid agencies that only do mobile apps and treat IoT as a side project.
Investment Breakdown and ROI
Total project investment: $100,000-$160,000 for the full stack — custom hardware, mobile apps, backend, and dashboard. The hardware design and manufacturing added roughly $40K-$60K to what a software-only build would cost. Monthly operational costs run $1,500-$2,500 for hosting, MQTT broker, TimescaleDB storage, and AWS IoT Core. Budget another $500-$1,000 per month for firmware updates and ongoing maintenance of the device fleet.
The return on investment scales with fleet size. Fleet operators save $200-$500 per vehicle per year through fuel theft prevention, driving efficiency improvements, and reduced maintenance costs from predictive alerts. A 100-vehicle fleet saves $20,000-$50,000 annually — that's a payback period of about 6 months on the software investment alone. At 30,000+ vehicles tracked, the platform generates substantial recurring revenue through per-vehicle monthly subscriptions. The ROI compounds because every new vehicle added costs almost nothing in marginal infrastructure.
Consider the cost of NOT building a custom platform. Off-the-shelf GPS solutions charge $15-$30 per vehicle per month with locked features and no OBD-II integration. At 30,000 vehicles, that's $450K-$900K per year in subscription fees — and you don't own the data or the platform. The custom investment of $160K pays for itself many times over compared to that ongoing pricing. Owning the infrastructure turned an operational cost into a revenue-generating product. That's the difference between renting a solution and building an asset worth real money.
Why Outsourcing Made Sense for This Project
IoT projects need firmware engineers, cloud architects, AND mobile developers — all at once. This rare combination costs $250,000+ per year to hire in-house, and finding people who've actually shipped IoT products at scale is even harder. Through Geminate's staff augmentation model, the client got a dedicated team of 5 specialists (2 Flutter, 2 Node.js, 1 embedded engineer) for $12,000-$18,000 per month — a savings of 75%+ compared to local hiring.
The decision to outsource was driven by expertise, not just pricing. MQTT at 30,000-device scale, TimescaleDB for billions of time-series records, real-time geofencing with spatial indexing — these aren't skills you pick up from a tutorial. Geminate's remote team had production experience with IoT protocols and high-volume data pipelines from previous fleet tracking projects. Choosing a technology partner with proven IoT delivery capability meant avoiding the expensive trial-and-error phase that an in-house team would need.
The 8-month timeline demanded a team that could hit the ground running. Recruiting and onboarding 5 specialists locally would've consumed half that time in hiring alone. The offshore development model worked because Geminate operates as a full-service company, not a staffing agency that sends resumes. The dedicated developers collaborated daily with the client's operations team, understood fleet management workflows, and made architecture decisions based on real-world IoT experience. That's what separates staff augmentation from just hiring remote developers.
Want similar results?
The architecture, technology choices, and scaling patterns from this project are directly reusable for your fleet management business.