
Matthew Raynor Photography Store

Full e-commerce platform for fine art photography with an AI shopping assistant, semantic search, and wall visualization.

Project Overview

A complete e-commerce platform for fine art drone and seascape photography targeting the Hamptons luxury art market. Features an AI shopping assistant built with LangChain and Claude that can search photos semantically, manage carts, visualize prints on customer walls using depth estimation, and handle checkout through conversation.

The Challenge

Selling fine art photography online requires more than a product grid — customers need to discover art by mood and meaning, visualize how pieces look in their space, and feel confident about size and materials before purchasing.

The Solution

Built an AI shopping assistant with 14 tools that searches photos semantically using pgvector embeddings, manages carts, filters by color/mood/subject, checks gift card balances, and answers sizing questions. The 'See It In Room' feature uses MiDaS depth estimation + RANSAC plane-fitting to composite prints at correct scale on customer-uploaded wall photos.
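
To make the tool-calling concrete, here is a minimal sketch of how two of the fourteen tools could be bound to Claude through LangChain. The tool bodies return placeholder data in place of the store's real pgvector search and cart layers, and the model name is an assumption, not the production configuration.

```python
# Rough sketch: two of the assistant's tools bound to Claude via LangChain.
# Tool bodies return placeholder data standing in for the real search/cart layers.
from langchain_core.tools import tool
from langchain_anthropic import ChatAnthropic


@tool
def search_photos(query: str, limit: int = 5) -> list:
    """Semantic photo search: return the prints closest in meaning to the query."""
    # Stand-in result; the real tool runs a pgvector cosine-similarity query.
    return [{"id": 1, "title": "Montauk Dawn", "match": query}][:limit]


@tool
def add_to_cart(photo_id: int, size: str) -> dict:
    """Add a print in the requested size to the visitor's session cart."""
    # Stand-in result; the real tool writes to the Django session cart.
    return {"photo_id": photo_id, "size": size, "status": "added"}


llm = ChatAnthropic(model="claude-3-5-sonnet-latest")  # assumed model name
assistant = llm.bind_tools([search_photos, add_to_cart])

# One turn: Claude decides which tool (if any) to call for the shopper's request.
reply = assistant.invoke("Show me a moody ocean sunset around 24 inches wide")
for call in reply.tool_calls:
    print(call["name"], call["args"])
```

The production assistant wires all fourteen tools this way and loops tool results back to the model until the conversation turn is complete.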

Technology Stack

Backend
Django 5, Django REST Framework, PostgreSQL, pgvector, Celery, Redis
Frontend
Next.js 15, TypeScript, Tailwind CSS
AI
LangChain, Claude API, Claude Vision, OpenAI Embeddings, MiDaS Depth Estimation
Integrations
Stripe Checkout, AWS S3, Resend, MailerLite, Sentry
Deployment
Railway, Netlify

Key Features

AI shopping assistant with 14 tools: search, cart, checkout, and wall visualization, all handled through conversation

pgvector semantic search using OpenAI text-embedding-ada-002 for meaning-based photo discovery

Claude Vision auto-generates all photo metadata (descriptions, moods, colors, subjects)

'See It In Room': MiDaS depth estimation and RANSAC plane-fitting composite prints at accurate scale on wall photos (a rough sketch of the wall-finding step follows this list)

Stripe Checkout with gift card redemption and promotional codes

Session-based cart persistence with cross-origin cookie handling

Next.js App Router with server-side rendering: server components call the internal API, client components call the public API
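
The wall-finding step behind 'See It In Room' could be sketched roughly as below. It assumes the publicly released MiDaS weights on torch.hub, guessed pinhole camera intrinsics, and a hand-rolled RANSAC; the store's actual scale calibration and final compositing code are not shown.

```python
# Rough sketch of the wall-finding step: estimate relative depth with MiDaS,
# back-project to a point cloud, then RANSAC-fit the dominant plane (the wall).
import cv2
import numpy as np
import torch

midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform
midas.eval()

img = cv2.cvtColor(cv2.imread("wall.jpg"), cv2.COLOR_BGR2RGB)
with torch.no_grad():
    disp = midas(transform(img)).squeeze().numpy()           # relative inverse depth
disp = cv2.resize(disp, (img.shape[1], img.shape[0]))
disp = (disp - disp.min()) / (disp.max() - disp.min() + 1e-6)

# Back-project pixels into a rough 3D point cloud with assumed intrinsics.
h, w = disp.shape
fx = fy = 0.8 * w                                             # focal length guess
xs, ys = np.meshgrid(np.arange(w), np.arange(h))
z = 1.0 / (disp + 0.5)                                        # pseudo-depth
pts = np.stack([(xs - w / 2) * z / fx, (ys - h / 2) * z / fy, z], -1).reshape(-1, 3)
pts = pts[::40]                                               # subsample for speed

# RANSAC: fit planes to random triples of points, keep the one most points agree with.
rng = np.random.default_rng(0)
best_inliers, best_normal = 0, None
for _ in range(200):
    a, b, c = pts[rng.choice(len(pts), 3, replace=False)]
    n = np.cross(b - a, c - a)
    if np.linalg.norm(n) < 1e-8:
        continue
    n /= np.linalg.norm(n)
    inliers = int((np.abs((pts - a) @ n) < 0.02).sum())
    if inliers > best_inliers:
        best_inliers, best_normal = inliers, n

print(f"wall normal ~ {best_normal}, inlier ratio {best_inliers / len(pts):.2f}")
# The real pipeline then warps the print onto the fitted plane with a homography.
```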

Business Impact

Semantic photo discovery — customers find art by meaning, not just keywords

Realistic wall visualization reduces purchase hesitation for expensive prints

Conversational commerce handles the entire shopping experience through the AI assistant

Automated metadata generation eliminates manual photo tagging

Technical Achievements

MiDaS + RANSAC pipeline that accurately places prints on real walls

Semantic search that understands 'moody ocean sunset' or 'bright aerial beach'

Full conversational commerce — customers can browse, add to cart, and check out without leaving the chat

Claude Vision metadata pipeline that auto-tags every photo (a sketch of the tagging call follows this list)
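
The auto-tagging call referenced above might look like the sketch below, which assumes the official anthropic Python SDK, a base64-encoded JPEG, and an illustrative model name and prompt.

```python
# Sketch of the Claude Vision auto-tagging call for one uploaded photo.
import base64
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

with open("photo.jpg", "rb") as f:
    image_b64 = base64.standard_b64encode(f.read()).decode()

message = client.messages.create(
    model="claude-3-5-sonnet-latest",  # assumed model
    max_tokens=512,
    messages=[{
        "role": "user",
        "content": [
            {"type": "image",
             "source": {"type": "base64", "media_type": "image/jpeg", "data": image_b64}},
            {"type": "text",
             "text": "Return JSON with a description, moods, dominant colors, "
                     "and subjects for this photograph."},
        ],
    }],
)
print(message.content[0].text)  # parsed downstream into the photo's metadata fields
```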

Future Enhancements

AR-based room visualization using device camera

Multi-currency support for international buyers

Artist collaboration marketplace

Technical Implementation

Photo embeddings generated with OpenAI text-embedding-ada-002, stored in PostgreSQL with pgvector for cosine similarity search. The AI assistant uses LangChain with Claude and 14 tools for a complete shopping experience. The 'See It In Room' feature uses MiDaS depth estimation to find walls in uploaded photos, then RANSAC plane-fitting to composite prints at physically accurate scale. Next.js App Router separates server components (which call the internal API) from client components (which use the public API).
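
As an illustration of the search path, the sketch below assumes a Django Photo model with a pgvector VectorField; the model and field names are illustrative, not the store's actual schema.

```python
# Sketch of the semantic search path: embed the shopper's phrase with OpenAI,
# then rank photos by cosine distance in PostgreSQL via pgvector.
from django.db import models
from openai import OpenAI
from pgvector.django import VectorField, CosineDistance


class Photo(models.Model):
    title = models.CharField(max_length=200)
    description = models.TextField()
    embedding = VectorField(dimensions=1536)  # text-embedding-ada-002 output size


def search_photos(query: str, limit: int = 5):
    """Embed the query and return the photos closest in meaning."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    vec = client.embeddings.create(
        model="text-embedding-ada-002", input=query
    ).data[0].embedding
    return Photo.objects.order_by(CosineDistance("embedding", vec))[:limit]


# e.g. search_photos("moody ocean sunset") returns the closest prints by meaning.
```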

Interested in This Project?

View the source code or see it in action