FoodSave Study Guide

Team 42 - Evaluation Prep

What is FoodSave?

the elevator pitch you should be able to give in 30 seconds

The problem: In India, around 68 million tonnes of food are wasted every year. Restaurants cook extra for peak hours, bakeries have unsold items by evening, and hostel canteens prepare food in bulk that students don't finish.

Our solution: FoodSave is a platform where restaurants, bakeries, and canteens can list surplus food at discounted prices. Nearby users can discover these deals in real time, reserve what they want, and pick it up within a given time window.

Why it matters: Vendors recover costs instead of throwing food away. Users get affordable meals. Food waste goes down. Everyone wins.

The Core Flow

1. Vendor posts surplus food
2. Consumer discovers it nearby
3. Consumer reserves and pays online
4. Consumer picks up with QR code
5. Vendor confirms pickup

Key Design Philosophy

Vendor-first approach: In the initial phase, we prioritize the vendor experience. Without enough vendors listing food, the platform has nothing to offer consumers. This thinking shaped many of our decisions: online-only payments (to protect vendors from no-shows), fast onboarding (first listing within 5 minutes), and quick QR scanning (vendors serve many customers).

Simple and maintainable: We went with a modular monolith instead of microservices. We chose proven tech (React, Node, PostgreSQL) instead of trendy frameworks. The goal is a system that a small team can maintain and grow over time.

Tech Stack

why we picked what we picked

| Layer | Technology | Why This? |
|---|---|---|
| Frontend | React + Next.js | Huge community in India. Easy to find devs. Next.js gives routing and SSR for free. |
| Backend | Node.js + Express | Same language (JavaScript) across frontend and backend. Whole team can work on anything. |
| Database | PostgreSQL | Our data is very relational (users have listings, listings have reservations). Rock solid and free. |
| Real-time | Socket.io | Built-in room support for targeted notifications. Works great with Express. |
| Containers | Docker + Docker Compose | One command to start everything. No "it works on my machine" problems. |

Why NOT Flutter / React Native?

For a prototype, a responsive web app is faster to build. No app store approval needed. Easier to demo in class. We can always wrap it in a mobile app later using a PWA or React Native.

Why NOT MongoDB?

Our data model is naturally relational. Users HAVE listings, listings HAVE reservations, reservations HAVE payments. PostgreSQL handles these relationships cleanly with foreign keys and transactions. MongoDB would need workarounds for things like preventing double-booking which PostgreSQL handles natively with row-level locking.
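To make "users HAVE listings, listings HAVE reservations, reservations HAVE payments" concrete, here is a sketch of the foreign-key chain written as a migration (table and column names are illustrative, not our actual schema):

```js
// hypothetical migration sketch for server/shared/db: the point is the
// FK chain users -> listings -> reservations -> payments
const { Pool } = require('pg');
const pool = new Pool(); // connection settings come from PG* env vars

async function migrate() {
  await pool.query(`
    CREATE TABLE IF NOT EXISTS users (
      id SERIAL PRIMARY KEY,
      role TEXT NOT NULL CHECK (role IN ('vendor', 'consumer'))
    );
    CREATE TABLE IF NOT EXISTS listings (
      id SERIAL PRIMARY KEY,
      vendor_id INT NOT NULL REFERENCES users(id),     -- users HAVE listings
      quantity INT NOT NULL CHECK (quantity >= 0)
    );
    CREATE TABLE IF NOT EXISTS reservations (
      id SERIAL PRIMARY KEY,
      listing_id INT NOT NULL REFERENCES listings(id), -- listings HAVE reservations
      consumer_id INT NOT NULL REFERENCES users(id)
    );
    CREATE TABLE IF NOT EXISTS payments (
      id SERIAL PRIMARY KEY,
      reservation_id INT NOT NULL REFERENCES reservations(id) -- reservations HAVE payments
    );
  `);
  await pool.end();
}

migrate();
```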

Functional Requirements

what the system actually does

FR-1 Vendor Listings
Vendors can register and list surplus food with title, quantity, price, photo, and a pickup window. Listings can be updated or removed at any time.
FR-2 Discovery
Consumers can browse food listings on a map or list view with filters for distance, cuisine type, and price range.
FR-3 Reserve and Pickup
Consumers can reserve items and show their QR code at the vendor's location. The vendor scans the QR to confirm pickup.
FR-4 Online Payments
Consumers pay online at the time of reservation via PhonePe Business. Vendors receive payouts on a two-week settlement cycle.
FR-5 Notifications
Push alerts are sent to consumers when nearby vendors post new surplus deals. Vendors get notified when someone reserves their food.
FR-6 Vendor Dashboard
Vendors can track food saved, revenue recovered, pending pickups, and customer ratings through a dashboard.

Non-Functional Requirements

every NFR has a business reason, not just a random number

| # | NFR | Target | Why This Number? |
|---|---|---|---|
| 1 | Performance | Listings under 2s, search under 1s | Slow load during lunch rush = users leave |
| 2 | Availability | 99.9% during peak hours | Downtime = lost revenue for vendors |
| 3 | Scalability | 5000 concurrent users | Meal-time spikes need headroom |
| 4 | Real-time | New listings visible within 30s | Food has a short window, delays hurt both sides |
| 5 | Security | Encrypted payments, no location storage | Users need to trust the platform early on |
| 6 | Usability | Reserve in 3 taps | Complex flow = fewer conversions |
| 7 | QR Speed | Validated within 500ms | Vendors serve multiple customers quickly |
| 8 | Consistency | Sold-out reflects within 5s | Double reservations damage vendor trust |
| 9 | Onboarding | First listing within 5 mins | Friction in onboarding kills supply-side growth |

Why 99.9% and not 99.99%?

99.99% means only about 52 minutes of downtime per year; that's the kind of SLA Google and AWS promise. For a food surplus app that mainly runs during lunch and dinner hours, that's overkill and extremely hard to achieve. 99.9% allows about 8.7 hours of downtime per year, which is still very good. The architecture is designed to scale toward higher availability as the system grows.

Architecturally Significant Requirements

Three NFRs directly shape how we built the system:

Real-time updates (NFR-4) drives the decision to use WebSockets instead of polling.

Availability (NFR-2) drives stateless servers, health checks, and auto-restart.

Scalability (NFR-3) drives JWT auth (no server sessions), horizontal scaling, and DB read replicas.
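On the JWT point: stateless auth means any instance can verify a request using nothing but the shared secret, with no session lookup. A minimal sketch, assuming the jsonwebtoken package and a JWT_SECRET env var (file and function names are illustrative):

```js
// hypothetical sketch of the JWT middleware in server/shared/middleware
const jwt = require('jsonwebtoken');

function requireAuth(req, res, next) {
  const header = req.headers.authorization || '';
  const token = header.startsWith('Bearer ') ? header.slice(7) : null;
  if (!token) return res.status(401).json({ error: 'missing token' });
  try {
    // no session store: the token itself carries { id, role, ... }
    req.user = jwt.verify(token, process.env.JWT_SECRET);
    next();
  } catch {
    res.status(401).json({ error: 'invalid or expired token' });
  }
}

// role guard layered on top, e.g. for vendor-only routes
function requireRole(role) {
  return (req, res, next) =>
    req.user.role === role ? next() : res.status(403).json({ error: 'forbidden' });
}

module.exports = { requireAuth, requireRole };
```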

Subsystems

7 modules, each with a clear job

1 User Management
Signup, login, profiles, and role management for vendors and consumers. Handles guardrails like phone verification and the business-name requirement for vendors.
2 Listing Management
Creating, updating, and closing food listings with price, quantity, and pickup window. Enforces listing quality rules (all required fields must be present).
3 Discovery
Map and search experience for consumers. Filtering listings by distance, cuisine, and price. This is the consumer-facing side of listings.
4 Reservation & QR
Reserving food, looking up orders via static consumer QR code, and confirming pickup through vendor scan. Uses database transactions with row locking to prevent double-booking.
5 Payments
Online payment processing via PhonePe Business. Handles transaction records, refunds, and vendor settlement tracking. Listens to reservation events automatically.
6 Notifications
Push alerts for new deals, order confirmations, and pickup status. Intentionally standalone so any future module can trigger notifications without touching other parts.
7 Vendor Dashboard
Analytics for vendors: food items saved, revenue recovered, pending pickups, and customer ratings.

Why is Notifications a separate subsystem?

Initially we thought about merging it with Listing Management since new-listing notifications were the main use case. But then we realized that notifications need to come from reservations (order status), payments (refund confirmations), and potentially the dashboard (weekly summaries). Keeping it standalone means any module can plug into it without changing other parts. This was a deliberate design decision.
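In code, "plug into it" just means registering listeners. A sketch of the Notification module's subscriptions (listing:created and reservation:created appear elsewhere in this guide; payment:refunded and the repo methods are assumptions for illustration):

```js
// hypothetical sketch of the Notification module's event listeners
const bus = require('../../shared/events/eventBus');
const repo = require('./notificationRepo'); // illustrative helper

bus.on('listing:created', (listing) => repo.alertNearbyConsumers(listing));
bus.on('reservation:created', (resv) => repo.alertVendor(resv));
bus.on('payment:refunded', (payment) => repo.alertConsumer(payment)); // assumed event
```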

Stakeholder Identification (IEEE 42010)

who cares about the system and what they worry about

| Stakeholder | Concerns | Viewpoint | Views (Subsystems) |
|---|---|---|---|
| Vendor | Can I list food quickly? Will I get paid? Can I trust the platform? | Operational | Listing Mgmt, Payments, Dashboard |
| Consumer | Can I find food nearby fast? Is payment safe? Will the food be there? | Functional | Discovery, Reservation, Payments |
| Developer | Is it easy to maintain? Can I add features without breaking things? | Implementation | All subsystems, deployment |
| Platform Admin | Are vendors behaving? Any fraud or abuse? | Compliance | User Mgmt, Notifications, flagging |
| Payment Gateway | Are API calls well formed? Proper error handling? | Integration | Payments subsystem |

Key Point: Self-Serve Onboarding

Vendor onboarding is self-serve with automated guardrails and no manual admin approval. This was a product decision driven by our goal of onboarding as many vendors as possible, as quickly as possible. The admin role exists but is reactive (reviewing flagged vendors and reported listings) rather than approving every signup.

Architecture Decision Records

5 big decisions, each with clear reasoning

ADR 1  Modular Monolith over Microservices

Context: Team of 5 starting from scratch. Need to move fast.

Decision: Build as a modular monolith. Each subsystem is a well-defined internal module with clear interfaces. Modules don't directly access each other's data.

Why not microservices? Way too much infrastructure for 5 people. We'd need 7+ services, 7+ databases, a message broker, API gateway, service mesh. Overkill for 5000 users.

Trade-off: The whole app is one failure unit, but this is mitigated by health checks and running multiple instances. Scaling means scaling the whole app, not individual pieces.

Key argument: Clean module boundaries mean we CAN extract services later if needed. We get the discipline of microservices without the operational cost.

ADR 2  WebSockets for Real-Time Updates

Context: New listings must appear within 30 seconds.

Decision: Use WebSockets (socket.io) to push updates to connected consumers.

Why not polling? Polling every few seconds wastes bandwidth and doesn't guarantee 30-second freshness. WebSockets push instantly.

Trade-off: Persistent connections need management during peak hours. But socket.io handles reconnection and rooms out of the box.

ADR 3  Self-Serve Vendor Onboarding with Guardrails

Context: Need maximum vendor signups in the early phase.

Decision: Self-serve onboarding. Guardrails include phone OTP, business name required, listing must have all fields. Low-rated vendors get auto-flagged.

Why not manual approval? Creates a bottleneck. Every vendor waits for admin. Kills the onboarding NFR (5 minutes).

Trade-off: Some bad actors may get through initially, but the flagging system limits damage.

ADR 4  Online-Only Payments via PhonePe

Context: Need to protect vendors from no-shows.

Decision: All payments online. Consumer pays at reservation. Vendors get payouts every 2 weeks. No COD initially.

Why no COD? If consumers can book without paying, they might never show up. This destroys vendor trust. Online-only means the consumer has skin in the game.

Trade-off: Some consumers without UPI access are excluded. But UPI penetration in India is very high, so this is acceptable.

ADR 5  Static Consumer QR Code

Context: Need a way for vendors to verify pickup. Originally we planned per-reservation signed QR codes.

Decision: Each consumer gets one permanent QR tied to their account. Vendor scans it, system shows all active orders from that consumer. Vendor picks which order to fulfill.

Why not per-reservation QR? Unnecessary complexity: cryptographic signing, expiry handling, and generation on every reservation. The static approach is simpler and works just as well.

Trade-off: A leaked QR is the obvious risk, but the damage is minimal. It only shows orders to the specific vendor scanning it, and it requires a paid reservation to be useful at all.
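A sketch of the scan lookup makes the "minimal damage" argument concrete: the query is scoped to the scanning vendor, so the token reveals nothing to anyone else. Column names and the status value are illustrative; db is assumed to be a pg Pool:

```js
// hypothetical sketch of the QR scan lookup in the Reservation module
async function lookupActiveOrders(db, qrToken, vendorId) {
  const { rows } = await db.query(
    `SELECT r.*
       FROM reservations r
       JOIN users u ON u.id = r.consumer_id
       JOIN listings l ON l.id = r.listing_id
      WHERE u.qr_token = $1       -- static token on the consumer's account
        AND l.vendor_id = $2      -- scoped to the scanning vendor only
        AND r.status = 'paid'`,   -- assumed status value
    [qrToken, vendorId]);
  return rows;
}
```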

Architectural Tactics and Patterns

how we achieve the NFRs in practice

5 Tactics

Scalability: Horizontal Scaling
Multiple monolith instances behind a load balancer. Stateless servers (JWT auth, no sessions) mean any instance can handle any request. Add instances during peak hours.

Performance: Database Read Replicas
Read-heavy ops (browsing, search) go to read replicas. Writes (new listings, reservations) go to the primary. Most traffic is consumers browsing, not vendors posting.

Availability: Health Checks + Auto-Restart
Each instance has a /health endpoint. The load balancer stops routing to dead instances. New ones spin up automatically. This is how we hit 99.9%.

Real-time: WebSocket with Geo-Filtering
New listings are pushed only to consumers within a radius of the vendor, not broadcast to everyone. This reduces unnecessary traffic as the user base grows (a sketch follows this list).

Security: Rate Limiting on Vendor APIs
Vendor endpoints are rate-limited. No one can spam 100 listings in a minute. Protects platform quality and server resources.
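A minimal sketch of the geo-filtered push, assuming the client sends a userId and a coarse areaId (a neighborhood-level bucket, not precise coordinates, consistent with NFR-5) in the socket.io handshake. The room naming and areaId scheme are illustrative, not our exact code:

```js
// consumers join a coarse area room on connect
io.on('connection', (socket) => {
  const { userId, areaId } = socket.handshake.auth; // sent by the client
  socket.join(`user:${userId}`); // for targeted notifications
  socket.join(`area:${areaId}`); // for nearby-deal broadcasts
});

// called from a listing:created listener; only sockets in the vendor's
// area room receive the push, not the whole user base
function pushNewListing(io, listing) {
  io.to(`area:${listing.areaId}`).emit('listing:new', {
    id: listing.id,
    title: listing.title,
    pickupWindow: listing.pickupWindow,
  });
}
```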

2 Design Patterns

Observer Pattern (Event Bus)

When a listing is created, the Listing module doesn't call Notifications, Discovery, or Dashboard directly. It just emits a listing:created event. All interested modules listen and react independently.

Where in code: server/shared/events/eventBus.js

Why it matters: Adding a new feature (like email notifications) is just adding a new listener. Zero changes to existing modules.
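The bus itself can be tiny. A minimal sketch (the real file may differ; Node's built-in EventEmitter already provides on/emit, which is all an in-process observer needs):

```js
// minimal sketch of server/shared/events/eventBus.js: one shared
// EventEmitter instance acts as the in-process event bus
const { EventEmitter } = require('events');
module.exports = new EventEmitter();
```

The Listing module then just calls bus.emit('listing:created', listing), and any interested module registers bus.on('listing:created', handler) without the publisher knowing who is listening.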

Repository Pattern (Data Access)

Each module has its own Repository file (userRepo.js, listingRepo.js, etc.) that handles all database queries. No module directly accesses another module's tables.

Where in code: Each server/modules/*/ folder has a *Repo.js file.

Why it matters: If we switch databases for a specific module later, we only change that one repo file. Also makes testing easier since we can mock the repo.
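A sketch of what one repo might look like, assuming server/shared/db exports a pg Pool (column names are illustrative):

```js
// hypothetical sketch of listingRepo.js: all SQL for the listings tables
// lives here, so other code never writes queries against them directly
const pool = require('../../shared/db'); // assumed to export a pg Pool

module.exports = {
  async findById(id) {
    const { rows } = await pool.query(
      'SELECT * FROM listings WHERE id = $1', [id]);
    return rows[0] || null;
  },

  async create({ vendorId, title, quantity, price, pickupWindow }) {
    const { rows } = await pool.query(
      `INSERT INTO listings (vendor_id, title, quantity, price, pickup_window)
       VALUES ($1, $2, $3, $4, $5) RETURNING *`,
      [vendorId, title, quantity, price, pickupWindow]);
    return rows[0];
  },
};
```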

Demo Flow

step by step guide to demo the app during evaluation

Before the demo: Make sure Docker is running. Run docker-compose up --build and then docker-compose exec server node seed.js to load demo data. The app will be at http://localhost:3000.

Step 1: Show Vendor Flow

Login as vendor: ravi@demo.com / password123

Show the dashboard with stats (active listings, completed orders, food saved).

Go to my listings to see existing food items.

Click post new listing and create one. Show the guardrails (all fields required, discounted price must be less than original).

Step 2: Show Consumer Flow

Logout. Login as consumer: priya@demo.com / password123

Go to browse. Show listings with prices and pickup windows.

Click reserve now on an item. Show the success message.

Go to my orders to see the reservation.

Go to my qr to show the static QR code.

Step 3: Show Pickup Flow

Logout. Login as vendor again.

Go to scan qr. Paste the consumer's QR token (from the seed script output).

Show the consumer's active orders appearing.

Click confirm pickup.

Go to dashboard to show updated stats.

Step 4: Talk About Architecture

Show the server/modules/ folder structure. Point out how each module has its own repo, controller, and routes.

Open server/shared/events/eventBus.js and explain the Observer pattern.

Open server/modules/reservations/reservationRepo.js and show the transaction with SELECT FOR UPDATE that prevents double-booking.

Code Structure

where everything lives in the codebase

foodsave/
├── client/                    # nextjs frontend
│   └── src/
│       ├── app/               # pages (home, browse, vendor/, consumer/)
│       ├── components/        # navbar
│       ├── contexts/          # auth state management
│       └── lib/               # api helper
├── server/
│   ├── modules/               # the 7 subsystems
│   │   ├── users/             # userRepo.js, userController.js, routes.js
│   │   ├── listings/          # listingRepo.js, listingController.js, routes.js
│   │   ├── discovery/         # discoveryRepo.js, discoveryController.js, routes.js
│   │   ├── reservations/      # reservationRepo.js, reservationController.js, routes.js
│   │   ├── payments/          # paymentRepo.js, paymentController.js, routes.js
│   │   ├── notifications/     # index.js (repo + controller + listeners), routes.js
│   │   └── dashboard/         # dashboardController.js, routes.js
│   ├── shared/
│   │   ├── db/                # connection pool + migrations
│   │   ├── events/            # eventBus.js (observer pattern)
│   │   └── middleware/        # jwt auth + role authorization
│   ├── index.js               # main entry, wires everything together
│   └── seed.js                # demo data
├── docker-compose.yml
└── README.md
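If asked how the entry point wires the modules together, a sketch of server/index.js might look like this (the route prefixes follow the tree above, but the exact paths and port are assumptions):

```js
// hypothetical sketch of server/index.js: the monolith just mounts each
// subsystem's router; modules otherwise talk through the event bus
const express = require('express');
const app = express();
app.use(express.json());

app.use('/api/users', require('./modules/users/routes'));
app.use('/api/listings', require('./modules/listings/routes'));
app.use('/api/discovery', require('./modules/discovery/routes'));
app.use('/api/reservations', require('./modules/reservations/routes'));
app.use('/api/payments', require('./modules/payments/routes'));
app.use('/api/notifications', require('./modules/notifications/routes'));
app.use('/api/dashboard', require('./modules/dashboard/routes'));

// health endpoint used by the load balancer (see the availability tactic)
app.get('/health', (req, res) => res.json({ status: 'ok' }));

app.listen(process.env.PORT || 4000);
```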

How Modules Communicate

Rule: Modules never import another module's repository directly (with one documented exception in reservations).

Events: When something happens (listing created, reservation made, order fulfilled), the module emits an event. Other modules listen and react.

Example flow: Consumer reserves food → Reservation module emits reservation:created → Payment module hears it and creates a payment record → Notification module hears it and notifies the vendor

The One Exception

The Reservation controller imports listingRepo (to check availability) and userRepo (to look up consumers by QR token). This is documented in the code with a comment explaining that in a future microservices setup, these would become API calls instead of direct imports.

Architecture Analysis

monolith vs microservices with actual numbers

Response Time Comparison

Tracing the "reserve food" flow:

| Step | Monolith | Microservices |
|---|---|---|
| Check listing availability | ~5-10ms (in-process call) | ~15-30ms (HTTP to Listing service) |
| Create reservation | ~10-20ms (single DB transaction) | ~10-20ms (own DB) |
| Emit/publish event | ~0ms (in-memory) | ~5-15ms (message broker) |
| Process payment | ~5-10ms (listener) | ~10-20ms (separate service) |
| Send notification | ~5-10ms (listener) | ~10-20ms (separate service) |
| Total | 25-50ms | 50-105ms |

Why: In-process function calls and in-memory events have near-zero overhead vs network calls between services.

Scalability Comparison

| Aspect | Monolith (3 instances) | Microservices (7 services) |
|---|---|---|
| Containers | 3 app + 2 DB | 7+ services + 7+ DBs + broker + gateway |
| Deploy complexity | One artifact | 7+ independent deploys |
| Team needed | 1-2 devs | 3-5 devs minimum |
| Infra cost | 2-3 VMs | 8-12 VMs |

Bottom Line

At 5000 users, monolith wins on simplicity, speed, and cost. Microservices win when you have 500K+ users and need to scale specific services independently. Our clean module boundaries make migration possible later without a rewrite.

Likely Evaluation Questions

practice these before the evaluation

Why did you choose a monolith instead of microservices?
We are a team of 5 building from scratch. Microservices would need 7+ services, 7+ databases, a message broker, API gateway, and a lot more infrastructure. That is overkill for 5000 users. We chose a modular monolith with clean boundaries so we can extract services later if needed. It gives us the discipline of microservices without the operational cost.
How do you prevent double-booking?
The reservation flow uses a PostgreSQL transaction with SELECT FOR UPDATE row-level locking. When two consumers try to reserve the last item at the same time, the database locks the listing row. Only one transaction succeeds, the other gets rolled back with a clear error message. This happens in reservationRepo.js.
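If asked to show the code, here is a sketch of the locking transaction (column names are illustrative; the BEGIN, SELECT ... FOR UPDATE, decrement, COMMIT-or-ROLLBACK shape is the part to remember):

```js
// hypothetical sketch of the double-booking guard in reservationRepo.js
async function reserve(pool, listingId, consumerId) {
  const client = await pool.connect();
  try {
    await client.query('BEGIN');

    // lock the listing row; a concurrent reservation blocks here
    const { rows } = await client.query(
      'SELECT quantity FROM listings WHERE id = $1 FOR UPDATE', [listingId]);
    if (!rows[0] || rows[0].quantity < 1) throw new Error('Sold out');

    await client.query(
      'UPDATE listings SET quantity = quantity - 1 WHERE id = $1', [listingId]);
    const { rows: [reservation] } = await client.query(
      `INSERT INTO reservations (listing_id, consumer_id, status)
       VALUES ($1, $2, 'pending') RETURNING *`,
      [listingId, consumerId]);

    await client.query('COMMIT');
    return reservation;
  } catch (err) {
    await client.query('ROLLBACK'); // releases the lock, nothing persists
    throw err;
  } finally {
    client.release();
  }
}
```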
Why online-only payments? Why no cash on delivery?
Our primary goal is protecting vendors from no-shows. If consumers can book without paying, they might never come to pick up. This wastes vendor time and food. Online payment means the consumer has already committed. We can add COD later once vendor trust is established.
Why is the QR code static and not generated per reservation?
Per-reservation QR codes add complexity like crypto signing, expiry, and generation on every booking. A static QR tied to the consumer's account is simpler. When the vendor scans it, the system just looks up active orders between that consumer and vendor. If there are multiple, the vendor picks which one. Simple, reliable, and no security risk since the QR is useless without a paid reservation.
Explain the Observer pattern in your system.
We have an event bus (eventBus.js) that implements the observer pattern. When something important happens, like a listing is created or a reservation is made, the module emits an event instead of calling other modules directly. Payments, Notifications, and Dashboard listen for these events and react independently. This keeps modules loosely coupled. Adding a new feature like email notifications is just adding a new listener, zero changes to existing code.
What is the Repository pattern and why did you use it?
Each module has its own repository file that handles all database queries for that module's tables. No module writes raw SQL outside its repo. This means if we switch databases for one module later, we only change that repo file. It also makes testing easier since we can mock the repo. And it enforces our rule that modules don't access each other's data directly.
What happens if the server crashes during a reservation?
The reservation uses a database transaction. If the server crashes mid-transaction, PostgreSQL automatically rolls back. The listing quantity stays unchanged, no payment is created, and no partial state is left behind. The consumer just sees an error and can retry.
How would you scale this beyond 5000 users?
First, add more monolith instances behind a load balancer since the app is stateless. Then add PostgreSQL read replicas for the Discovery module's heavy read queries. If specific modules become bottlenecks (say Discovery at 100K users), we can extract that module into its own service since the boundaries are already clean. The event bus would be replaced with a message broker like RabbitMQ for that extracted service.
Why did you prioritize vendors over consumers?
Without vendors listing food, the platform has nothing to offer consumers. It is a supply-driven marketplace. This is similar to how Swiggy and Zomato started: onboarding restaurants first, before focusing on consumer marketing. Once we have enough supply, consumer growth follows naturally.
Why Next.js instead of plain React?
Next.js gives us file-based routing, server-side rendering, and API route proxying out of the box. With plain React we would need to set up React Router, configure a proxy for the backend API, and handle a lot more boilerplate. Next.js saves time for a prototype.
How does the notification system work?
The Notification module listens for events from the event bus (like reservation:created or listing:created). When it hears an event, it creates a notification record in the database and pushes a real-time alert via WebSocket to the relevant user. Users are organized in socket.io rooms by their user ID so we can target specific users.
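Put together, the handler can be very small. A sketch (notificationRepo.insert is an assumed API; the user:<id> room is joined on connection, as in the geo-filtering sketch earlier):

```js
// hypothetical sketch: persist the notification, then push it in real time
async function notify(io, notificationRepo, userId, payload) {
  await notificationRepo.insert(userId, payload); // assumed repo method
  io.to(`user:${userId}`).emit('notification', payload); // targeted room
}
```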
What is IEEE 42010 and how did you apply it?
IEEE 42010 is a standard for architecture documentation. It says you should identify stakeholders, their concerns, and the architectural viewpoints that address them. We identified 5 stakeholders: vendors, consumers, developers, platform admin, and payment gateway. Each has specific concerns that map to specific subsystems. For example, vendors care about listing speed, payment reliability, and analytics, which maps to the Listing, Payments, and Dashboard subsystems.
What tools did you use to build this?
We used Claude (by Anthropic) to help write the prototype code. The team focused on architecture decisions, requirements analysis, design patterns, testing, and the technical report. The code was reviewed, tested, and debugged by the team after generation.

Who Did What

know your own contribution well for the evaluation

Important: We used Claude to help write the prototype code. The team's work was on architecture, design, requirements, decisions, testing, and the report. Make sure you can explain your part clearly.
Vivek R 2019112020
Led the architecture discussions. Pushed for the modular monolith approach. Helped define the 7 subsystems and their boundaries. Contributed to ADR 1 (monolith) and ADR 2 (WebSockets). Tested the reservation flow for edge cases like double-booking and race conditions.
Manjri Malhotra 2025900015
Drove the requirements gathering and business reasoning behind each NFR. Heavily involved in the IEEE 42010 stakeholder analysis. Suggested keeping Notifications as a standalone subsystem. Worked on the architecture comparison section of the report.
Sai Bharath Besta 2025901008
Focused on database schema design and table relationships. Defined repository pattern boundaries. Wrote ADR 3 (self-serve onboarding). Tested the discovery module's distance-based search query.
Sai Vineesh 2019112010
Contributed to payment flow design and wrote ADR 4 (online-only payments). Researched PhonePe Business API and settlement cycle. Helped define architectural tactics (read replicas, rate limiting). Tested the end-to-end vendor-to-consumer flow.
Abdul Muqtadir 2024901004
Researched IEEE 42010 and drafted stakeholder analysis. Worked on ADR 5 (QR code design), originally proposed per-reservation QR before the team simplified it. Defined WebSocket notification architecture. Helped with the scalability analysis.