Aspiring AV engineer focused on path/motion planning and Reinforcement Learning. I'm researching methods to develop more effective model-free RL policies for mid-to-end driving (mid-level representations → low-level control). USC Men's Track & Field athlete (2025 indoor & outdoor national titles); bilingual (English/Spanish). Targeting full-time roles starting Spring 2026.
A modern fragrance discovery site where users can browse brands, explore note groups, and search fragrances with fast, typed endpoints. The project is full-stack: a Next.js frontend, a Django API, and a PostgreSQL database. A lightweight RAG chatbot powered by YouTube captions makes searching smoother.
The frontend uses Next.js (App Router) with React and JavaScript for a fast, accessible SPA/SSR hybrid. Pages are routed by slug (e.g., brands, notes, fragrance detail), with incremental/static rendering where it helps performance. Styling favors a modern, component-driven approach (responsive grid cards, clean typography, and dark-mode support via next-themes). Data access is via fetch calls to the external API, keeping UI code framework-pure and portable. Client state is kept minimal; most screen data comes from server-side or on-demand fetches to keep hydration light and UX snappy. The result is a crisp catalog experience that scales without coupling UI to database details.
The backend is a separate API service to keep concerns clean and deployment flexible. It exposes typed, versioned endpoints for brands, fragrances, and notes (list, detail, search, and filters), with predictable pagination and slug-based lookups. The API handles input validation, error shaping, and CORS for the frontend, and leaves room for rate limits and caching. A small utilities layer (aromarch-db-utils) supports tasks like seeding, data backfills, and one-off scripts. This separation means the API can scale (and be tested) independently and allows future swaps—e.g., alternate auth, search providers, or specialized workers—without touching the UI.
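To make the endpoint shape concrete, here is a minimal sketch of what a slug-based, paginated catalog endpoint could look like in Django REST Framework. The model names, fields, page size, and the `?brand=` filter are illustrative assumptions (this is not the actual Aromarch code), and the snippet assumes it lives inside an installed Django app.

```python
# Hypothetical sketch of a slug-based, paginated list/detail endpoint in the
# style described above. Names (Brand, Fragrance, slug) are placeholders.
from django.db import models
from rest_framework import pagination, serializers, viewsets


class Brand(models.Model):
    name = models.CharField(max_length=120)
    slug = models.SlugField(unique=True)


class Fragrance(models.Model):
    name = models.CharField(max_length=200)
    slug = models.SlugField(unique=True)
    brand = models.ForeignKey(Brand, related_name="fragrances", on_delete=models.CASCADE)


class FragranceSerializer(serializers.ModelSerializer):
    brand = serializers.SlugRelatedField(slug_field="slug", read_only=True)

    class Meta:
        model = Fragrance
        fields = ["name", "slug", "brand"]


class CatalogPagination(pagination.PageNumberPagination):
    page_size = 24  # predictable page size for the frontend's grid cards


class FragranceViewSet(viewsets.ReadOnlyModelViewSet):
    """List/detail endpoints with slug lookups and optional ?brand= filtering."""
    serializer_class = FragranceSerializer
    pagination_class = CatalogPagination
    lookup_field = "slug"

    def get_queryset(self):
        qs = Fragrance.objects.select_related("brand").order_by("name")
        brand = self.request.query_params.get("brand")
        return qs.filter(brand__slug=brand) if brand else qs
```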
Built on PostgreSQL with pgvector for semantic search, the schema centers on `brands`, `fragrances`, `notes`, and a join table `fragrance_notes` to model the many-to-many relationship between scents and notes. Each entity has a stable `slug`, strict foreign keys, and unique constraints; common lookups (by brand, slug, or ID) are backed by B-tree indexes to keep catalog queries fast. Descriptive text (e.g., fragrance summaries, brand bios, note-group blurbs) is embedded and stored in `vector` columns (dimension matched to the embedding model), enabling nearest-neighbor queries via pgvector to power “similar fragrances” and precise RAG retrieval. For scale, we use pgvector’s ANN indexing (e.g., HNSW/IVF-Flat depending on version and dataset size) to keep latency low, while the relational core stays normalized and migration-friendly so future features—seasonality tags, longevity/sillage, user lists—drop in without schema churn.
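As an illustration of the pgvector side, a "similar fragrances" lookup might look like the sketch below. The connection string, table name, embedding dimension, and the choice of cosine distance are assumptions for the example, not the real schema.

```python
# Minimal sketch of a pgvector nearest-neighbor query, assuming a fragrances
# table with an `embedding vector(384)` column and an ANN index on it.
import psycopg


def similar_fragrances(query_embedding: list[float], limit: int = 5):
    # pgvector accepts a bracketed literal cast to ::vector.
    vector_literal = "[" + ",".join(f"{x:.6f}" for x in query_embedding) + "]"
    with psycopg.connect("postgresql://localhost/aromarch") as conn:  # placeholder DSN
        rows = conn.execute(
            """
            SELECT slug, name, embedding <=> %s::vector AS cosine_distance
            FROM fragrances
            ORDER BY embedding <=> %s::vector
            LIMIT %s
            """,
            (vector_literal, vector_literal, limit),
        ).fetchall()
    return rows
```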
The chatbot blends Aromarch’s own information (brands, fragrances, note-group blurbs) with a set of fragrance YouTubers. A script pulls each channel’s captions/transcripts, cleans them, splits them into sentence/paragraph-aware chunks with slight overlap, and embeds those chunks; vectors plus rich metadata (channel, video, timestamp, date, language, topics) are stored in PostgreSQL with pgvector. At question time, Aromarch runs hybrid retrieval—vector similarity (ANN via pgvector) plus lightweight keyword filtering, which can be optionally scoped by channel or recency, then re-ranks the top passages and assembles an answer with inline citations (video title + timestamp) so users can jump to sources. The pipeline deduplicates near-identical clips, respects a whitelist of channels, and refreshes incrementally so new uploads become searchable without full reindexing. Providers for embeddings/LLMs are pluggable, making it easy to switch between local and hosted setups as the project scales.
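A rough sketch of the caption-chunking step is shown below. The chunk size, sentence overlap, and the metadata fields carried along are illustrative choices rather than the exact pipeline.

```python
# Hedged sketch of transcript chunking: sentence-aware chunks with slight
# overlap, each tagged with channel/video/timestamp metadata for citations.
import re
from dataclasses import dataclass


@dataclass
class Chunk:
    text: str
    channel: str
    video_id: str
    start_ts: float  # timestamp (seconds) of the first caption in the chunk


def chunk_transcript(captions, channel, video_id, max_chars=800, overlap_sents=2):
    """captions: list of (timestamp_seconds, text) pairs from a caption file."""
    # Split into sentences, remembering which caption (and timestamp) each came from.
    sentences = []
    for ts, text in captions:
        for sent in re.split(r"(?<=[.!?])\s+", text.strip()):
            if sent:
                sentences.append((ts, sent))

    chunks, window, fresh = [], [], 0
    for ts, sent in sentences:
        window.append((ts, sent))
        fresh += 1
        if sum(len(s) for _, s in window) >= max_chars:
            chunks.append(Chunk(" ".join(s for _, s in window), channel, video_id, window[0][0]))
            window = window[-overlap_sents:]  # carry slight overlap into the next chunk
            fresh = 0
    if fresh:  # flush the tail only if it contains new sentences
        chunks.append(Chunk(" ".join(s for _, s in window), channel, video_id, window[0][0]))
    return chunks
```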
This project is still being developed incrementally while I am in school, so it is currently private on GitHub. Upon recruiter request, I will gladly give a live tour of the site!
Led a group of 5 people to build an autonomous, model-based land rover. Evaluation involved completing increasingly complex tasks.
This diagram shows the wiring and data flow for the "finished" vehicle. A LiPo battery powers a DC-DC regulator, the mini-PC, the microcontroller, and the motor drivers. The mini-PC connects by USB to the GPS puck and webcam, and over USB/serial to the microcontroller. The microcontroller aggregates sensor inputs (e.g., range/IMU/encoders) and sends commands to the motor driver that turns the drive motor. An RC receiver provides a manual-override/E-stop path to the controller. Throughout all 4 tasks, the vehicle's basic setup remains constant, with sensors removed/added depending on each task's requirements.
We were tasked with validating the vehicle platform by completing a GPS waypoint-following exercise. This end-to-end trial verified the emergency-stop system, wiring integrity, motor drivetrain control, and GPS receiver performance, establishing a functional baseline for future integration and testing.
Since we were dealing with limited computational capabilities (old hardware), we decided to shut off all sensors, their respective ROS tasks, and most microprocessor tasks. This allowed the vehicle to more smoothly follow the GPS waypoints set in Mission Planner.
In this project, the vehicle performed waypoint-based navigation within the laboratory. A predefined set of waypoints (x, y, z) was loaded into the onboard MATLAB controller, which generated the motion commands to drive the platform between targets. Each waypoint was physically marked on the floor to provide ground truth for error measurement. The objective was to visit all waypoints with minimal positional error while maintaining continuous localization (i.e., without losing track of its position). No sensors other than encoders were allowed.
We enabled encoder-based odometry and used the center-wheel tick counts to compute wheel speeds. From these we derived the vehicle's linear velocity and yaw rate, then integrated a differential-drive model to estimate pose (x, y, θ) via dead-reckoning. A waypoint tracker consumed this pose to command motion between targets. To handle skid/slip, we monitored divergence between expected and encoder-derived progress; when error exceeded a threshold, the controller slowed, re-initialized at the nearest waypoint, and resumed the route.
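A minimal sketch of that dead-reckoning update (in Python rather than the MATLAB used on the vehicle) might look like this; the tick resolution, wheel radius, and track width are assumed parameters.

```python
# Encoder dead-reckoning for a differential-drive base: integrate tick deltas
# into a pose estimate (x, y, theta). Geometry/resolution values are placeholders.
import math


def update_pose(pose, d_ticks_left, d_ticks_right, ticks_per_rev, wheel_radius, track_width):
    """One integration step; pose = (x, y, theta) in meters/radians."""
    x, y, theta = pose
    # Convert tick deltas to wheel travel distances.
    d_left = 2 * math.pi * wheel_radius * d_ticks_left / ticks_per_rev
    d_right = 2 * math.pi * wheel_radius * d_ticks_right / ticks_per_rev
    d_center = (d_left + d_right) / 2.0          # linear displacement
    d_theta = (d_right - d_left) / track_width   # change in heading (yaw)
    # Midpoint integration of the unicycle model.
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta = (theta + d_theta + math.pi) % (2 * math.pi) - math.pi  # wrap to [-pi, pi)
    return (x, y, theta)
```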
In this project, the vehicle executed a predefined loop in the laboratory using an Extended Kalman Filter (EKF) that fused wheel-encoder odometry, lidar, and sonar. Prior to testing, we surveyed the room to build an a priori occupancy map from lidar/sonar; during navigation, the onboard MATLAB controller used this map to associate/gate range measurements and constrain EKF updates while the filter propagated a differential-drive motion model from encoder-derived velocities. Performance was validated by completing the loop ten consecutive times without veering off course—our pass/fail criterion indicating the EKF was supplying sufficiently accurate state estimates. Ground-truth floor marks were used to quantify residual error.
We disabled GPS/camera to reduce compute load and pre-mapped the lab with lidar/sonar. An EKF fused encoder odometry with map-gated lidar/sonar to estimate pose while following a fixed loop. If deviation exceeded a threshold, the controller re-projected the pose onto the path via encoder odometry and corrected mid-segment before continuing. Physical setup was the same for projects 3 and 4.
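For reference, the EKF roughly follows the predict/update pattern sketched below (again in Python rather than the MATLAB we ran). The range-to-map-feature measurement model and the noise matrices Q and R are simplifying assumptions for illustration.

```python
# Simplified EKF skeleton: predict with a differential-drive motion model from
# encoder-derived velocities, correct with a range to a known map feature.
import numpy as np


def ekf_predict(x, P, v, w, dt, Q):
    """x = [px, py, theta]; v = linear velocity, w = yaw rate from encoders."""
    px, py, th = x
    x_pred = np.array([px + v * dt * np.cos(th),
                       py + v * dt * np.sin(th),
                       th + w * dt])
    # Jacobian of the motion model with respect to the state.
    F = np.array([[1, 0, -v * dt * np.sin(th)],
                  [0, 1,  v * dt * np.cos(th)],
                  [0, 0, 1]])
    return x_pred, F @ P @ F.T + Q


def ekf_update(x, P, z, landmark, R):
    """Correct with a measured range z to a known map feature; R is 1x1 noise."""
    dx, dy = landmark[0] - x[0], landmark[1] - x[1]
    r = np.hypot(dx, dy)
    H = np.array([[-dx / r, -dy / r, 0.0]])   # Jacobian of the range measurement
    y = np.array([z - r])                     # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    return x + K @ y, (np.eye(3) - K @ H) @ P
```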
(From Assignment Description) - Start from an unknown pose and drive to a single specified goal without any preloaded intermediate waypoints. Use the prior map of the lab (walls and tables only) plus on-board sensing to localize yourself, detect unmodeled obstacles (e.g., chairs), and generate waypoints either from the map or on the fly as you progress. Plan globally against the prior map, avoid newly detected obstacles locally, maintain a consistent pose estimate, replan when blocked, and reach the goal safely without collisions.
We tackled this by fusing wheel-encoder odometry with lidar/sonar in an EKF for real-time pose (x, y, θ), then running A* over the prior occupancy grid to seed a global corridor to the goal. From that corridor we emitted adaptive waypoints (distance-based or at turns) and followed them with a cross-track/heading controller. A local planner (velocity obstacles/DWA style) consumed live lidar to steer around unknown objects and temporarily veto unsafe motions. When obstacle inflation or pose uncertainty exceeded thresholds, we replanned the global path and regenerated waypoints; if sensors dropped out, we fell back to short-horizon dead-reckoning until measurements returned. All runs logged pose, sensor traces, and replan events to evaluate accuracy and robustness.
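The global planning step is essentially grid A*. A compact illustrative version (not the MATLAB implementation we ran) is sketched below, assuming a binary occupancy grid, 4-connectivity, and a Manhattan heuristic.

```python
# Illustrative A* over a 2D occupancy grid; grid[r][c] == 1 means occupied.
import heapq


def astar(grid, start, goal):
    """start/goal are (row, col) tuples; returns a list of cells or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda cell: abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start)]
    parent, g_cost, closed = {start: None}, {start: 0}, set()
    while open_set:
        _, g, node = heapq.heappop(open_set)
        if node in closed:
            continue
        closed.add(node)
        if node == goal:  # reconstruct path from goal back to start
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        r, c = node
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nb
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_cost.get(nb, float("inf")):
                    g_cost[nb] = ng
                    parent[nb] = node
                    heapq.heappush(open_set, (ng + h(nb), ng, nb))
    return None  # no path found
```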
Over two semesters, I led the team responsible for ROS 2 path/motion planning and electronics integration in a three-team project.
We were given a budget of $2,000, which was thin. Since my group only needed to develop software, and the previous team on the project had already bought most of the electronics, we were allotted $300. Additionally, ROS 2 had just come out as the 2nd semester started, so our sponsor asked us to switch completely from ROS 1 to ROS 2, which caused a lot of finished components to break and need refactoring during the transfer. "Success" for the project as a whole was defined as building a vehicle that could qualify for IGVC, though given the budget constraints we were not expected to finish during our time there, and the project would be passed on to another group. Success for our group meant having software that could effectively help the vehicle traverse a simulated environment and that would turn the motors in an appropriate manner. Our vehicle had a few sensors: LiDAR, an RGB camera, and a depth camera, and it ran on ArduPilot's CubeOrange autopilot. Gazebo was used to simulate the vehicle.
Our system is organized into three layers—Sensor, Controller, and Hardware—with clear, safety-first interfaces. GPS, LiDAR, and a camera in the Sensor layer provide global position, obstacle ranges, and visual context. These feeds go to the Controller layer where a ROS 2 path-/motion-planning stack fuses them to build a local map and generate waypoint or velocity setpoints. Setpoints are handed to a Cube Orange autopilot that runs the real-time control loop and state estimation; it enforces limits and failsafes, then converts commands to actuator outputs (PWM/CAN) for the Hardware layer. A Mission Planner ground station supervises configuration, mode changes, and telemetry/logging over the autopilot link. The Hardware layer contains the Motor Control Block (ESCs/drivers), a regulated Power Supply, and Relay & Switching Controls for power-gating and E-stop. An Arduino handles non-critical, discrete I/O (e.g., relays, indicators, limit switches) and reports status upstream. This split keeps heavy perception and planning on ROS 2 while hard real-time actuation stays on the autopilot, yielding a modular, testable stack with deterministic control and independent safety paths.
Our ROS 2 stack separates perception, navigation, and actuation: sensor drivers publish LiDAR (/scan), camera (/camera/image_raw), and GNSS/IMU (/fix, /imu), while the Cube Orange autopilot provides low-latency odometry and the tf tree (map→odom→base_link). Navigation uses Nav2 with global/local costmaps (obstacle + inflation from LiDAR), a Smac/A* planner that outputs paths, and a DWB or Pure Pursuit controller that produces /cmd_vel; bt_navigator coordinates goals and recoveries. A MAVROS2 bridge converts /cmd_vel and waypoints into MAVLink setpoints for the Cube Orange, which enforces limits and drives the motors (PWM/CAN). A microcontroller bridge exposes relays, E-stop, and limit switches as ROS topics/services. QoS is SensorDataQoS for raw sensor streams and Reliable keep-last for control/odometry, launched via a single bringup file with YAML parameters; safety includes a heartbeat watchdog that zeros /cmd_vel, a dual-path E-stop (topic plus hardware relay), and autopilot geofence/manual override.
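The heartbeat watchdog is small enough to sketch. The version below assumes Nav2 publishes on a hypothetical `/cmd_vel_nav` topic that this node relays to `/cmd_vel`, with the timeout and rates as placeholder values; it is a sketch of the pattern, not our bringup code.

```python
# rclpy sketch: sensor-data QoS for /scan, reliable QoS for control topics,
# and a watchdog that zeros the command if the planner stream stalls.
import rclpy
from rclpy.node import Node
from rclpy.qos import qos_profile_sensor_data
from sensor_msgs.msg import LaserScan
from geometry_msgs.msg import Twist


class CmdVelWatchdog(Node):
    """Relays planner commands to the base and zeros them on timeout."""

    def __init__(self):
        super().__init__("cmd_vel_watchdog")
        # Best-effort, sensor-data QoS for the high-rate LiDAR stream (shown for contrast).
        self.create_subscription(LaserScan, "/scan", lambda msg: None, qos_profile_sensor_data)
        # Reliable, keep-last (depth 10) QoS for control messages.
        self.create_subscription(Twist, "/cmd_vel_nav", self.on_cmd, 10)
        self.pub = self.create_publisher(Twist, "/cmd_vel", 10)
        self.last_cmd_time = self.get_clock().now()
        self.create_timer(0.1, self.check_heartbeat)  # 10 Hz watchdog

    def on_cmd(self, msg: Twist):
        self.last_cmd_time = self.get_clock().now()
        self.pub.publish(msg)  # forward the planner's command to the base

    def check_heartbeat(self):
        # No command for 0.5 s: publish a zero Twist so the autopilot stops the motors.
        if (self.get_clock().now() - self.last_cmd_time).nanoseconds > 0.5e9:
            self.pub.publish(Twist())


def main():
    rclpy.init()
    rclpy.spin(CmdVelWatchdog())


if __name__ == "__main__":
    main()
```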
Path and motion planning use ROS 2 Nav2. Globally, we run the Smac Planner (A*/Hybrid-A*) over a layered costmap (static + obstacle + inflation) to compute a near-optimal, collision-free path in the map frame; the path is smoothed and discretized for downstream tracking. Locally, we use the Dynamic Window Approach via Nav2’s DWB controller: at 20–50 Hz it samples admissible velocity commands (vx, ω) under acceleration and curvature limits, forward-simulates short trajectories on the local costmap, and scores them with weighted critics (obstacle clearance, path alignment, goal heading, velocity) to select /cmd_vel. Collision checks use the robot footprint against the inflated costmap, and kinematic/dynamic limits ensure feasibility and comfort. The autopilot then tracks these velocity setpoints and applies final safety limits and failsafes.
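To make the DWB/DWA idea concrete, here is a stripped-down sampling-and-scoring loop. The critic weights, velocity/acceleration limits, and the `costmap_cost` callback (returning 0..1, with 1 as lethal) are placeholders rather than our Nav2 configuration; the goal is expressed in the robot frame.

```python
# Simplified dynamic-window sampling: enumerate reachable (v, w) pairs,
# forward-simulate short arcs, reject collisions, and score with weighted critics.
import math


def score_trajectory(traj, goal, costmap_cost):
    """Weighted critics: obstacle clearance, distance-to-goal, heading alignment."""
    gx, gy = goal
    clearance = min(1.0 - costmap_cost(px, py) for px, py, _ in traj)  # higher is safer
    x, y, th = traj[-1]
    goal_dist = math.hypot(gx - x, gy - y)
    heading_err = abs(math.atan2(gy - y, gx - x) - th)
    return 2.0 * clearance - 1.0 * goal_dist - 0.5 * heading_err


def pick_cmd_vel(v0, w0, goal, costmap_cost, dt=0.1, horizon=1.5,
                 a_max=0.5, alpha_max=1.0, v_max=1.0, w_max=1.5):
    """Return the best-scoring (v, w) command from the dynamic window."""
    best_score, best_cmd = -float("inf"), (0.0, 0.0)
    for dv in (-a_max * dt, 0.0, a_max * dt):          # velocities reachable in one step
        for dw in (-alpha_max * dt, 0.0, alpha_max * dt):
            v = max(0.0, min(v_max, v0 + dv))
            w = max(-w_max, min(w_max, w0 + dw))
            # Forward-simulate a short arc with the unicycle model.
            x = y = th = t = 0.0
            traj = []
            while t < horizon:
                x += v * dt * math.cos(th)
                y += v * dt * math.sin(th)
                th += w * dt
                traj.append((x, y, th))
                t += dt
            # Reject trajectories that touch lethal cost (collision check).
            if any(costmap_cost(px, py) >= 1.0 for px, py, _ in traj):
                continue
            score = score_trajectory(traj, goal, costmap_cost)
            if score > best_score:
                best_score, best_cmd = score, (v, w)
    return best_cmd
```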
The ArduPilot Cube Orange is the real-time controller and I/O hub, driving the motor controllers via PWM (or CAN where available) and exchanging telemetry/commands with the ROS 2 companion through MAVLink over UART. A 2.4 GHz RC receiver (antenna pair mounted externally) feeds the Cube Orange via SBUS/PPM for manual override and safety-critical mode changes. An Arduino handles auxiliary I/O—relay/E-stop coil, status LEDs, and discrete sensors—and reports state over a simple serial link. Power is distributed from the main battery through fused rails and DC-DC regulators (logic 5 V isolated from high-current drives), with a common-ground star point, ferrites on noisy lines, and shielded/twisted signal runs to reduce EMI. A hardware E-stop physically removes drive power independent of software; the autopilot enforces geofence and speed limits, providing a layered fail-safe path.
The vehicle frame was not completed in time for full end-to-end field trials, but the vision, motion-planning (Nav2), and electronics subsystems were validated. Using an off-chassis test rig, we performed walk-around tests with the LiDAR, companion computer (ROS 2), Cube Orange autopilot, and motor/driver assembly: live scans drove the global/local planners to produce velocity setpoints, the autopilot translated these into motor commands, and the motors actuated as expected. This hardware-in-the-loop setup simulated on-vehicle operation and verified end-to-end dataflow (sensors → Nav2 → MAVLink/autopilot → motor drivers), timing, and responsiveness. RViz visualization and rosbag2 logs confirmed correct costmap updates and controller behavior under representative scenarios.
Assisted in developing a model-based mini-vehicle. Evaluation involved completing various tasks.
Autonomously navigate a known tile-based obstacle course to a specified goal without touching obstacle tiles and while staying inside the workspace. We were given the start, goal, and obstacle positions ahead of time (compile-time inputs), so we had to (1) design and describe the robot, and (2) implement a navigation strategy that works for arbitrary valid layouts. Accuracy mattered more than speed or shortest path.
Two side motors provide drive and two rear ball casters add balance. The best-performing wheels were 7 cm diameter × 3.5 cm wide; thinner 8 cm treaded wheels made forward motion turbulent and caused veering. The only sensor is a gyroscope, whose readings were inconsistent, which led to constant on-the-go recalibrations.
We computed a goal-centered Manhattan-distance field on the 10×16 grid (goal = 0; obstacles set very large) using BFS, so the resulting “navigation function” has no spurious local minima. We then extracted a shortest path with a consistent tie-break (Up → Down → Right → Left) to favor stable vertical motion. On the robot, each step turned to the target heading, drove half a tile, performed a gyroscope check/correction, completed the tile, and rechecked, with directions mapped to specific target angles and runs starting at 0°.
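A reconstruction of that BFS navigation function and tie-broken descent is sketched below. The grid size and obstacle encoding match the description, but the code is illustrative rather than the robot's original source.

```python
# BFS distance field from the goal, then greedy descent with a fixed tie-break.
from collections import deque

ROWS, COLS = 10, 16
BIG = 10**6  # value assigned to obstacle/unreachable tiles

# Tie-break order favors stable vertical motion: Up, Down, Right, Left.
MOVES = [(-1, 0), (1, 0), (0, 1), (0, -1)]


def distance_field(goal, obstacles):
    dist = [[BIG] * COLS for _ in range(ROWS)]
    dist[goal[0]][goal[1]] = 0
    q = deque([goal])
    while q:
        r, c = q.popleft()
        for dr, dc in MOVES:
            nr, nc = r + dr, c + dc
            if 0 <= nr < ROWS and 0 <= nc < COLS and (nr, nc) not in obstacles:
                if dist[nr][nc] > dist[r][c] + 1:
                    dist[nr][nc] = dist[r][c] + 1
                    q.append((nr, nc))
    return dist


def extract_path(start, goal, dist):
    """Greedy descent of the distance field; BFS guarantees no spurious minima."""
    path, (r, c) = [start], start
    while (r, c) != goal:
        for dr, dc in MOVES:  # first strictly decreasing neighbor, in tie-break order
            nr, nc = r + dr, c + dc
            if 0 <= nr < ROWS and 0 <= nc < COLS and dist[nr][nc] < dist[r][c]:
                r, c = nr, nc
                break
        else:
            raise ValueError("goal unreachable from start")
        path.append((r, c))
    return path
```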
Design and build a behavior-based fire-alarm robot that, starting from an unknown position in an indoor environment with ~15 cm walls (no exterior openings), explores to locate a “fire” (a lit candle on a distinct half-tile area), then raises an alarm and attempts to extinguish it. We were told to implement a coordinated set of behaviors (wander/search, single-direction wall-following, goal finding/identification, and extinguish) using a behavior arbitration scheme such as subsumption or weighted averaging. The robot had to be able to sense and follow walls (e.g., dual touch-sensor bumpers) and detect the fire; global localization isn’t required since both start and fire locations are initially unknown.
Built with a platform mounting the brick directly above the motors to keep weight centered and leave clearance underneath for the color sensor (positioned ~1 cm above the floor for reliable detection). Two front-mounted touch sensors are tied together by a bumper so light glancing hits still trigger contact; the bumper was iterated for durability while staying loose enough not to bind the sensors. An ultrasonic sensor sits on the left, near the rear, giving space for turns during exploration.
We developed a behavior-based controller that alternates between wall-following and wandering. Wall-following engages when a left-side wall is detected within ~30 cm by the ultrasonic sensor and tries to maintain ~7 cm standoff; front bumpers override distance-keeping (left hit → turn right, right hit → turn left; both hit → turn based on presence of a left wall). When no left wall is sensed, the robot nudges left and switches to wander. Wandering is “structured random”: go straight briefly, then make a short random left/right turn (≈0–60° via timed turns), repeating until either a wall is found (return to wall-following) or the colored half-tile is detected, which triggers alarm/extinguish. Key tuning solved loops and over-hugging: lowering detection thresholds and balancing randomness with control improved coverage and reliability. When the flame was found, the vehicle's fan would turn on to put it out.
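Conceptually, the arbitration reduces to a fixed-priority check each control cycle, roughly like the sketch below. The sensor/motor API names and thresholds are stand-ins for the brick's actual interface, not the original source.

```python
# Schematic fixed-priority (subsumption-style) arbitration for one control cycle:
# the first behavior whose trigger fires takes control of the motors.
import random


def fire_alarm_step(sensors, motors):
    """One cycle; behaviors listed from highest to lowest priority."""
    if sensors.color_is_fire_tile():                  # goal found: alarm + extinguish
        motors.stop()
        motors.sound_alarm()
        motors.run_fan()
    elif sensors.left_bumper() and sensors.right_bumper():
        motors.turn(right=sensors.left_wall_distance() < 30)  # both hit: turn based on left wall
    elif sensors.left_bumper():                       # glancing hit on the left
        motors.turn(right=True)
    elif sensors.right_bumper():
        motors.turn(right=False)
    elif sensors.left_wall_distance() < 30:           # wall-follow at ~7 cm standoff
        error = sensors.left_wall_distance() - 7
        motors.drive(speed=40, steer=-0.5 * error)
    else:                                             # structured random wander
        motors.drive(speed=40, steer=0)
        motors.timed_turn(random.uniform(0, 60) * random.choice([-1, 1]))
```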
The goal of the project is to build a robot that can play a simplified version of soccer using an IR ball and IR seeker sensors. Two teams play against each other on a field with three zones: two defense zones that only the defensive team's robot can be in, and a middle zone that both teams' robots can be in. The goal for each team is to have the ball cross the other team's base line to score. Once a goal is scored, the robots are moved into their team's defense zone and the ball is placed in the center. The game then restarts, with the team that was scored on getting a 1 s head start.
EV3 brick on a detachable top platform; compact footprint ~9″×8.5″×6″; extra stabilizing rods and redundant under-frames for impact resistance; weight shifted rearward for traction. A pincer bumper up front both traps and “kicks” the ball. Sensors: HiTechnic IRSeeker mounted just above the bumper, ultrasonic co-located for frontal range, color sensor ~1 cm above floor to detect the enemy line, and a gyroscope (angles normalized mod 360) for heading.
A behavior-based controller with (1) ball_search: wander with guarded randomness, avoid walls if ultrasonics ~15 cm, and perform ±20° sweeps after failed turns until either the ball is detected (handoff to aggressive) or the enemy line is sensed (back up + turn); (2) aggressive: if ball is off-axis and weak, re-orient slowly; if centered and close (signal over threshold) and facing the enemy goal, charge; if centered but far, approach slowly to avoid knocking it away; continually fall back to search if the signal drops; (3) charge/dribble burst motions (200–500 ms at ~2000 deg/s); (4) red_turn: on enemy-line detect, back up and choose the safer turn using current gyro heading. To keep sensing responsive during motion, timed motor calls were replaced by short for-loop steps that poll sensors.
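The last point, replacing long timed motor calls with short steps that keep polling sensors, looks roughly like this in sketch form; the motor/sensor calls and thresholds are placeholders rather than the brick's real API.

```python
# Sketch of a sensing-responsive burst: drive hard, but re-check the IR seeker
# and color sensor every few milliseconds instead of blocking on a timed call.
import time


def charge_burst(motors, ir_seeker, color_sensor, duration_s=0.4, step_s=0.02):
    """Drive toward the ball in short steps; return the next behavior to run."""
    elapsed = 0.0
    while elapsed < duration_s:
        motors.drive(speed=2000)                 # deg/s burst speed, as in the text
        if color_sensor.sees_enemy_line():       # abort: about to cross the enemy line
            motors.stop()
            return "red_turn"
        if ir_seeker.signal_strength() < 20:     # ball lost: fall back to search
            motors.stop()
            return "ball_search"
        time.sleep(step_s)
        elapsed += step_s
    return "aggressive"
```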
Implemented a polynomial version of ALEX (a learned index structure) in DuckDB to speed up indexing.
ALEX (Adaptive Learned Index) replaces B-tree splitters with tiny models that predict key→position, adapting its layout to data/workload so lookups are fast on smooth or skewed keys while remaining robust in worst cases.
DuckDB is an in-process OLAP engine with a clean vectorized pipeline, single-file databases, and an extension API—perfect for testing learned indexes on read-heavy, sorted columns without running a server.
We built an ALEX-style learned index for DuckDB (read-only analytics) that swaps purely linear models for lightweight piecewise polynomial segments, improving accuracy under curvature and shifting key distributions. At query time, the index predicts key→position for point and range filters, then does a tiny local refinement to get exact bounds (for ranges, we predict both ends and scan the narrow slice). The structure is stored alongside the table (models + segment metadata) and rebuilt offline when data change—keeping runtime fast, simple, and deterministic.
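Outside DuckDB, the core lookup idea can be sketched in a few lines. The segment size, polynomial degree, and `max_error` window below are assumptions, and numpy's least-squares fit stands in for the Cramer's-rule fit the prototype actually used (discussed next).

```python
# Piecewise-polynomial learned index sketch: fit position ~ poly(key) per
# segment of a sorted column, predict a slot, then refine locally for exactness.
import bisect
import numpy as np


class PolySegmentIndex:
    def __init__(self, sorted_keys, segment_size=4096, degree=2):
        self.keys = np.asarray(sorted_keys, dtype=np.float64)
        self.bounds, self.coeffs = [], []
        for start in range(0, len(self.keys), segment_size):
            seg = self.keys[start:start + segment_size]
            pos = np.arange(start, start + len(seg), dtype=np.float64)
            # Fit position ≈ poly(key) on this segment (least squares, not Cramer).
            self.coeffs.append(np.polyfit(seg, pos, deg=min(degree, len(seg) - 1)))
            self.bounds.append(seg[0])  # first key of each segment for routing

    def lookup(self, key, max_error=64):
        seg = max(0, bisect.bisect_right(self.bounds, key) - 1)
        guess = int(round(np.polyval(self.coeffs[seg], key)))
        lo = max(0, guess - max_error)
        hi = min(len(self.keys), guess + max_error + 1)
        # Local refinement: exact position via binary search in the small window.
        i = lo + int(np.searchsorted(self.keys[lo:hi], key))
        if i < len(self.keys) and self.keys[i] == key:
            return i
        # Fallback: prediction error exceeded the window, full binary search.
        j = int(np.searchsorted(self.keys, key))
        return j if j < len(self.keys) and self.keys[j] == key else None
```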
On synthetic and TPC-H-like read-only tests, point lookups and narrow ranges saw consistently lower latency than full scans and binary search on the raw column by ~2-5%, with the largest wins on smooth or skewed key distributions (3-7%). Memory overhead stayed modest (tiny models per segment), build time was linear in rows, and we observed graceful degradation on hard cases (e.g., highly irregular keys) thanks to the fallback local search. In short: compact index, fast probes, predictable behavior—well-suited to DuckDB’s analytic workflow. Using Cramer’s rule to fit the polynomials makes training slow and numerically unstable on large datasets. This prototype shows the promise of polynomial learned indexes, but it doesn’t yet scale; switching to QR/SVD or online solvers should in theory improve performance, but we did not have the time needed to test this.
My main interest is in autonomous vehicle development (path planning, motion planning, and controls). I have worked on several projects involving these vehicles and their subsystems. Most of my experience is in rule-based ground vehicles, but I am actively researching model-free methods in USC's AutoDrive Lab. I also want to expand my area of knowledge into aquatic vehicles, and am working to involve myself with organizations in this field.
Beyond AVs, I build full-stack tools and work on databases. I created Aromarch, a fragrance library with a Next.js + CSS frontend, a decoupled Django API, and a PostgreSQL schema; I also added a retrieval-augmented generation (RAG) chatbot powered by YouTube captions to improve discovery and search. On the database side, I implemented a modified version of ALEX's learned index structure in DuckDB, yielding ~3-8% faster retrieval at certain scales.
Track & field has been a big part of my life for over a decade. I ran at UT Arlington and now I run at USC. Being on a team means showing up early, heavy lifts, leading warm-ups, checking in on guys after tough workouts, keeping practice fun but focused, and representing the school the right way. I've had highs and lows, but the sport has taught me consistency, leadership, and resilience. I've learned to trust the thousands of miles in my legs, bounce back from rough days, and lead by example when no one's watching, lessons that permeate everything I do.
In my free time I like to unwind by reading up on fragrance chemistry: how ingredients and concentration affect balance, diffusion, and longevity. I sample often, write reviews, and am building Aromarch to organize them.
My current picks: Blessed Baraka, Encre Noire À L’Extrême, Acqua di Giò Absolu.