What is FoodSave?
the elevator pitch you should be able to give in 30 seconds
The problem: In India, around 68 million tonnes of food is wasted every year. Restaurants cook extra for peak hours, bakeries have unsold items by evening, and hostel canteens prepare food in bulk that students don't finish.
Our solution: FoodSave is a platform where restaurants, bakeries, and canteens can list surplus food at discounted prices. Nearby users can discover these deals in real time, reserve what they want, and pick it up within a given time window.
Why it matters: Vendors recover costs instead of throwing food away. Users get affordable meals. Food waste goes down. Everyone wins.
The Core Flow
Key Design Philosophy
Vendor-first approach: In the initial phase, we prioritize vendor experience. Without enough vendors listing food, the platform has nothing to offer consumers. This thinking shaped many of our decisions: online-only payments (to protect vendors from no-shows), fast onboarding (first listing within 5 minutes), and quick QR scanning (vendors serve many customers in a short window).
Simple and maintainable: We went with a modular monolith instead of microservices. We chose proven tech (React, Node, PostgreSQL) instead of trendy frameworks. The goal is a system that a small team can maintain and grow over time.
Tech Stack
why we picked what we picked
| Layer | Technology | Why This? |
|---|---|---|
| Frontend | React + Next.js | Huge community in India. Easy to find devs. Next.js gives routing and SSR for free. |
| Backend | Node.js + Express | Same language (JavaScript) across frontend and backend. Whole team can work on anything. |
| Database | PostgreSQL | Our data is very relational (users have listings, listings have reservations). Rock solid and free. |
| Real-time | Socket.io | Built-in room support for targeted notifications. Works great with Express. |
| Containers | Docker + Docker Compose | One command to start everything. No "it works on my machine" problems. |
Why NOT Flutter / React Native?
For a prototype, a responsive web app is faster to build. No app store approval needed. Easier to demo in class. We can always wrap it in a mobile app later using a PWA or React Native.
Why NOT MongoDB?
Our data model is naturally relational. Users HAVE listings, listings HAVE reservations, reservations HAVE payments. PostgreSQL handles these relationships cleanly with foreign keys and transactions. MongoDB would need workarounds for things like preventing double-booking which PostgreSQL handles natively with row-level locking.
Functional Requirements
what the system actually does
Non-Functional Requirements
every NFR has a business reason, not just a random number
| # | NFR | Target | Why This Number? |
|---|---|---|---|
| 1 | Performance | Listings under 2s, search under 1s | Slow load during lunch rush = users leave |
| 2 | Availability | 99.9% during peak hours | Downtime = lost revenue for vendors |
| 3 | Scalability | 5000 concurrent users | Meal-time spikes need headroom |
| 4 | Real-time | New listings visible within 30s | Food has a short window, delays hurt both sides |
| 5 | Security | Encrypted payments, no location storage | Users need to trust the platform early on |
| 6 | Usability | Reserve in 3 taps | Complex flow = fewer conversions |
| 7 | QR Speed | Validated within 500ms | Vendors serve multiple customers quickly |
| 8 | Consistency | Sold-out reflects within 5s | Double reservations damage vendor trust |
| 9 | Onboarding | First listing within 5 mins | Friction in onboarding kills supply side growth |
Why 99.9% and not 99.99%?
99.99% means only about 52 minutes of downtime per year. That's what Google and AWS promise. For a food surplus app that mainly runs during lunch and dinner hours, that's overkill and extremely hard to achieve. 99.9% allows about 8.7 hours of downtime per year, which is still very good. The architecture is designed to scale toward higher availability as the system grows.
Architecturally Significant Requirements
Three NFRs directly shape how we built the system:
Real-time updates (NFR-4) drives the decision to use WebSockets instead of polling.
Availability (NFR-2) drives stateless servers, health checks, and auto-restart.
Scalability (NFR-3) drives JWT auth (no server sessions), horizontal scaling, and DB read replicas.
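To make the availability and scalability points concrete, here is a minimal sketch of what stateless JWT auth plus a /health endpoint could look like with Express and the jsonwebtoken package. This is an illustration, not the project's actual middleware; route paths and the JWT_SECRET variable are assumptions.

```js
// Sketch only: stateless JWT auth + health check (assumed names).
const express = require('express');
const jwt = require('jsonwebtoken');

const app = express();
app.use(express.json());

// NFR-2: the load balancer polls /health and stops routing to
// instances that fail to answer; the orchestrator restarts them.
app.get('/health', (req, res) => res.json({ status: 'ok' }));

// NFR-3: stateless auth. Any instance can verify the token, so no
// sticky sessions or shared session store are needed.
function requireAuth(req, res, next) {
  const header = req.headers.authorization || '';
  const token = header.startsWith('Bearer ') ? header.slice(7) : null;
  if (!token) return res.status(401).json({ error: 'missing token' });
  try {
    req.user = jwt.verify(token, process.env.JWT_SECRET); // { id, role }
    return next();
  } catch (err) {
    return res.status(401).json({ error: 'invalid or expired token' });
  }
}

// Example protected route: only vendors may create listings.
app.post('/api/listings', requireAuth, (req, res) => {
  if (req.user.role !== 'vendor') {
    return res.status(403).json({ error: 'vendors only' });
  }
  res.status(201).json({ ok: true });
});

app.listen(4000);
```

Because the token carries the user's id and role, any of the horizontally scaled instances can serve any request, which is what allows the 5000-concurrent-user target without session affinity.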
Subsystems
7 modules, each with a clear job
Why is Notifications a separate subsystem?
Initially we thought about merging it with Listing Management since new-listing notifications were the main use case. But then we realized that notifications need to come from reservations (order status), payments (refund confirmations), and potentially the dashboard (weekly summaries). Keeping it standalone means any module can plug into it without changing other parts. This was a deliberate design decision.
Stakeholder Identification (IEEE 42010)
who cares about the system and what they worry about
| Stakeholder | Concerns | Viewpoint | Views (Subsystems) |
|---|---|---|---|
| Vendor | Can I list food quickly? Will I get paid? Can I trust the platform? | Operational | Listing Mgmt, Payments, Dashboard |
| Consumer | Can I find food nearby fast? Is payment safe? Will the food be there? | Functional | Discovery, Reservation, Payments |
| Developer | Is it easy to maintain? Can I add features without breaking things? | Implementation | All subsystems, deployment |
| Platform Admin | Are vendors behaving? Any fraud or abuse? | Compliance | User Mgmt, Notifications, flagging |
| Payment Gateway | Are API calls well formed? Proper error handling? | Integration | Payments subsystem |
Key Point: Self-Serve Onboarding
Vendor onboarding is self-serve with automated guardrails. No manual admin approval. This was a product decision driven by our goal of getting as many vendors onboard as quickly as possible. The admin role exists but is reactive (reviewing flagged vendors and reported listings) rather than approving every signup.
Architecture Decision Records
5 big decisions, each with clear reasoning
ADR-1: Modular Monolith over Microservices
Context: Team of 5 starting from scratch. Need to move fast.
Decision: Build as a modular monolith. Each subsystem is a well-defined internal module with clear interfaces. Modules don't directly access each other's data.
Why not microservices? Way too much infrastructure for 5 people. We'd need 7+ services, 7+ databases, a message broker, an API gateway, and a service mesh. Overkill for 5000 users.
Trade-off: Single point of failure, but mitigated by health checks and multiple instances. Scaling means scaling the whole app, not individual pieces.
Key argument: Clean module boundaries mean we CAN extract services later if needed. We get the discipline of microservices without the operational cost.
ADR-2: WebSockets for Real-Time Updates
Context: New listings must appear within 30 seconds.
Decision: Use WebSockets (socket.io) to push updates to connected consumers.
Why not polling? Polling every few seconds wastes bandwidth and doesn't guarantee 30-second freshness. WebSockets push instantly.
Trade-off: Persistent connections need management during peak hours. But socket.io handles reconnection and rooms out of the box.
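A minimal sketch of how this push model works with socket.io is shown below. Room and event names here are illustrative assumptions, not the project's actual ones; the real logic lives in the Listing and Notification modules.

```js
// Sketch: pushing new listings to connected consumers with socket.io.
const http = require('http');
const express = require('express');
const { Server } = require('socket.io');

const app = express();
app.use(express.json());
const server = http.createServer(app);
const io = new Server(server, { cors: { origin: '*' } });

// Consumers join a room for their area so pushes stay targeted.
io.on('connection', (socket) => {
  socket.on('join:area', (area) => socket.join(`area:${area}`));
});

// When a vendor posts a listing, push it to that area immediately --
// no polling loop, so "visible within 30s" is met by design.
app.post('/api/listings', (req, res) => {
  const listing = { id: Date.now(), ...req.body };
  io.to(`area:${listing.area}`).emit('listing:new', listing);
  res.status(201).json(listing);
});

server.listen(4000);
```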
ADR-3: Self-Serve Vendor Onboarding
Context: Need maximum vendor signups in the early phase.
Decision: Self-serve onboarding. Guardrails include phone OTP, business name required, listing must have all fields. Low-rated vendors get auto-flagged.
Why not manual approval? Creates a bottleneck. Every vendor waits for admin. Kills the onboarding NFR (5 minutes).
Trade-off: Some bad actors may get through initially, but the flagging system limits damage.
ADR-4: Online-Only Payments (No COD)
Context: Need to protect vendors from no-shows.
Decision: All payments online. Consumer pays at reservation. Vendors get payouts every 2 weeks. No COD initially.
Why no COD? If consumers can book without paying, they might never show up. This destroys vendor trust. Online-only means the consumer has skin in the game.
Trade-off: Some consumers without UPI access are excluded. But UPI penetration in India is very high, so this is acceptable.
ADR-5: Static Per-Consumer QR Codes
Context: Need a way for vendors to verify pickup. Originally we planned per-reservation signed QR codes.
Decision: Each consumer gets one permanent QR tied to their account. Vendor scans it, system shows all active orders from that consumer. Vendor picks which order to fulfill.
Why not per-reservation QR? Unnecessary complexity. Crypto signing, expiry handling, generation on every reservation. The static approach is simpler and works just as well.
Trade-off: If someone's QR leaks, there is little harm. It only shows orders to the specific vendor scanning it, and it requires a paid reservation to be useful.
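For illustration, the vendor-side scan lookup is roughly shaped like the sketch below, assuming node-postgres. Table and column names (users.qr_token, reservations.status, etc.) are assumptions about the schema, not copied from the codebase.

```js
// Sketch of the QR scan lookup (assumed schema, not the real one).
const { Pool } = require('pg');
const pool = new Pool();

// Vendor scans a consumer's permanent QR token. We only return that
// consumer's active orders placed with THIS vendor, which is why a
// leaked QR reveals nothing useful to anyone else.
async function getActiveOrdersForScan(qrToken, vendorId) {
  const { rows } = await pool.query(
    `SELECT r.id, l.title, r.quantity, r.status
       FROM users u
       JOIN reservations r ON r.consumer_id = u.id
       JOIN listings l     ON l.id = r.listing_id
      WHERE u.qr_token = $1
        AND l.vendor_id = $2
        AND r.status = 'reserved'`,
    [qrToken, vendorId]
  );
  return rows;
}

module.exports = { getActiveOrdersForScan };
```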
Architectural Tactics and Patterns
how we achieve the NFRs in practice
5 Tactics
Health monitoring and auto-restart: every app instance exposes a /health endpoint. The load balancer stops routing to dead instances, and new ones spin up automatically. This is how we hit 99.9%.

2 Design Patterns
Observer Pattern (Event Bus)
When a listing is created, the Listing module doesn't call Notifications, Discovery, or Dashboard directly. It just emits a listing:created event. All interested modules listen and react independently.
Where in code: server/shared/events/eventBus.js
Why it matters: Adding a new feature (like email notifications) is just adding a new listener. Zero changes to existing modules.
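A minimal sketch of what eventBus.js can look like: Node's built-in EventEmitter is enough at this scale. The wrapper shown here is an assumption about the real file, not a copy of it.

```js
// server/shared/events/eventBus.js -- sketch, assuming a thin wrapper
// around Node's built-in EventEmitter.
const { EventEmitter } = require('events');

class EventBus extends EventEmitter {}

// One shared instance for the whole process; every module imports it.
const eventBus = new EventBus();

// Several modules subscribe to the same events (payments, notifications,
// dashboard), so raise the default listener cap.
eventBus.setMaxListeners(50);

module.exports = eventBus;

// Usage sketch in the Listing module:
//   eventBus.emit('listing:created', { listingId, vendorId });
```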
Repository Pattern
Each module has its own Repository file (userRepo.js, listingRepo.js, etc.) that handles all database queries. No module directly accesses another module's tables.
Where in code: Each server/modules/*/ folder has a *Repo.js file.
Why it matters: If we switch databases for a specific module later, we only change that one repo file. Also makes testing easier since we can mock the repo.
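As an illustration, a module repository like listingRepo.js might contain something along these lines. The SQL, column names, and the shared pool path are assumptions for the sketch.

```js
// server/modules/listings/listingRepo.js -- sketch. All SQL for the
// listings table lives here; controllers never touch the DB directly.
const pool = require('../../shared/db/pool'); // assumed shared pg pool

async function create({ vendorId, title, originalPrice, discountedPrice, quantity }) {
  const { rows } = await pool.query(
    `INSERT INTO listings (vendor_id, title, original_price, discounted_price, quantity)
     VALUES ($1, $2, $3, $4, $5)
     RETURNING *`,
    [vendorId, title, originalPrice, discountedPrice, quantity]
  );
  return rows[0];
}

async function findActiveByVendor(vendorId) {
  const { rows } = await pool.query(
    'SELECT * FROM listings WHERE vendor_id = $1 AND quantity > 0',
    [vendorId]
  );
  return rows;
}

module.exports = { create, findActiveByVendor };
```

Because only this file knows the table layout, swapping the storage for one module, or mocking it in tests, touches exactly one place.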
Demo Flow
step by step guide to demo the app during evaluation
Setup: run `docker-compose up --build`, then `docker-compose exec server node seed.js` to load the demo data. The app will be at http://localhost:3000.
Step 1: Show Vendor Flow
Login as vendor: ravi@demo.com / password123
Show the dashboard with stats (active listings, completed orders, food saved).
Go to my listings to see existing food items.
Click post new listing and create one. Show the guardrails (all fields required, discounted price must be less than original).
Step 2: Show Consumer Flow
Logout. Login as consumer: priya@demo.com / password123
Go to browse. Show listings with prices and pickup windows.
Click reserve now on an item. Show the success message.
Go to my orders to see the reservation.
Go to my qr to show the static QR code.
Step 3: Show Pickup Flow
Logout. Login as vendor again.
Go to scan qr. Paste the consumer's QR token (from the seed script output).
Show the consumer's active orders appearing.
Click confirm pickup.
Go to dashboard to show updated stats.
Step 4: Talk About Architecture
Show the server/modules/ folder structure. Point out how each module has its own repo, controller, and routes.
Open server/shared/events/eventBus.js and explain the Observer pattern.
Open server/modules/reservations/reservationRepo.js and show the transaction with SELECT FOR UPDATE that prevents double-booking.
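If asked to walk through it, the reservation transaction is roughly shaped like the sketch below (simplified, with assumed table and column names; it is not the literal contents of reservationRepo.js).

```js
// Sketch of the double-booking guard, assuming node-postgres (pg).
async function reserve(pool, listingId, consumerId, qty) {
  const client = await pool.connect();
  try {
    await client.query('BEGIN');

    // Lock the listing row: a concurrent reservation for the same
    // listing blocks here until this transaction commits or rolls back.
    const { rows } = await client.query(
      'SELECT quantity FROM listings WHERE id = $1 FOR UPDATE',
      [listingId]
    );
    if (!rows.length || rows[0].quantity < qty) {
      throw new Error('Not enough items left');
    }

    await client.query(
      'UPDATE listings SET quantity = quantity - $1 WHERE id = $2',
      [qty, listingId]
    );
    const reservation = await client.query(
      `INSERT INTO reservations (listing_id, consumer_id, quantity, status)
       VALUES ($1, $2, $3, 'reserved') RETURNING *`,
      [listingId, consumerId, qty]
    );

    await client.query('COMMIT');
    return reservation.rows[0];
  } catch (err) {
    await client.query('ROLLBACK');
    throw err; // surfaces as a clear "sold out" error to the consumer
  } finally {
    client.release();
  }
}

module.exports = { reserve };
```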
Code Structure
where everything lives in the codebase
```
foodsave/
├── client/                  # Next.js frontend
│   └── src/
│       ├── app/             # pages (home, browse, vendor/, consumer/)
│       ├── components/      # navbar
│       ├── contexts/        # auth state management
│       └── lib/             # api helper
├── server/
│   ├── modules/             # the 7 subsystems
│   │   ├── users/           # userRepo.js, userController.js, routes.js
│   │   ├── listings/        # listingRepo.js, listingController.js, routes.js
│   │   ├── discovery/       # discoveryRepo.js, discoveryController.js, routes.js
│   │   ├── reservations/    # reservationRepo.js, reservationController.js, routes.js
│   │   ├── payments/        # paymentRepo.js, paymentController.js, routes.js
│   │   ├── notifications/   # index.js (repo + controller + listeners), routes.js
│   │   └── dashboard/       # dashboardController.js, routes.js
│   ├── shared/
│   │   ├── db/              # connection pool + migrations
│   │   ├── events/          # eventBus.js (observer pattern)
│   │   └── middleware/      # jwt auth + role authorization
│   ├── index.js             # main entry, wires everything together
│   └── seed.js              # demo data
├── docker-compose.yml
└── README.md
```
How Modules Communicate
Rule: Modules never import another module's repository directly (with one documented exception in reservations).
Events: When something happens (listing created, reservation made, order fulfilled), the module emits an event. Other modules listen and react.
Example flow: Consumer reserves food → Reservation module emits reservation:created → Payment module hears it and creates a payment record → Notification module hears it and notifies the vendor
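The listener side of that flow, sketched below; module and handler names are illustrative, and the commented-out calls stand in for the real repo and notification functions.

```js
// Sketch: how downstream modules subscribe to reservation events.
const eventBus = require('../../shared/events/eventBus');

// Payments: record a payment when a reservation is created.
eventBus.on('reservation:created', async ({ reservationId, amount }) => {
  // await paymentRepo.createForReservation(reservationId, amount);
  console.log(`payment recorded for reservation ${reservationId} (${amount})`);
});

// Notifications: tell the vendor they have a new order.
eventBus.on('reservation:created', async ({ vendorId, listingTitle }) => {
  // await notifyUser(vendorId, `New reservation for ${listingTitle}`);
  console.log(`notified vendor ${vendorId} about ${listingTitle}`);
});
```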
The One Exception
The Reservation controller imports listingRepo (to check availability) and userRepo (to look up consumers by QR token). This is documented in the code with a comment explaining that in a future microservices setup, these would become API calls instead of direct imports.
Architecture Analysis
monolith vs microservices with actual numbers
Response Time Comparison
Tracing the "reserve food" flow:
| Step | Monolith | Microservices |
|---|---|---|
| Check listing availability | ~5-10ms (in-process call) | ~15-30ms (HTTP to Listing service) |
| Create reservation | ~10-20ms (single DB transaction) | ~10-20ms (own DB) |
| Emit/publish event | ~0ms (in-memory) | ~5-15ms (message broker) |
| Process payment | ~5-10ms (listener) | ~10-20ms (separate service) |
| Send notification | ~5-10ms (listener) | ~10-20ms (separate service) |
| Total | 25-50ms | 50-105ms |
Why: In-process function calls and in-memory events have near-zero overhead vs network calls between services.
Scalability Comparison
| Aspect | Monolith (3 instances) | Microservices (7 services) |
|---|---|---|
| Containers | 3 app + 2 DB | 7+ services + 7+ DBs + broker + gateway |
| Deploy complexity | One artifact | 7+ independent deploys |
| Team needed | 1-2 devs | 3-5 devs minimum |
| Infra cost | 2-3 VMs | 8-12 VMs |
Bottom Line
At 5000 users, monolith wins on simplicity, speed, and cost. Microservices win when you have 500K+ users and need to scale specific services independently. Our clean module boundaries make migration possible later without a rewrite.
Likely Evaluation Questions
practice these before the evaluation
How we prevent double-booking: SELECT FOR UPDATE row-level locking. When two consumers try to reserve the last item at the same time, the database locks the listing row. Only one transaction succeeds; the other gets rolled back with a clear error message. This happens in reservationRepo.js.

How modules communicate: through a shared event bus (eventBus.js) that implements the observer pattern. When something important happens, like a listing being created or a reservation being made, the module emits an event instead of calling other modules directly. Payments, Notifications, and Dashboard listen for these events and react independently. This keeps modules loosely coupled: adding a new feature like email notifications is just adding a new listener, with zero changes to existing code.

How notifications work: the Notifications module listens for events (reservation:created or listing:created). When it hears an event, it creates a notification record in the database and pushes a real-time alert via WebSocket to the relevant user. Users are organized in socket.io rooms by their user ID so we can target specific users.

Who Did What
know your own contribution well for the evaluation