abi/screenshot-to-code Review: Turn Screenshots into HTML, Tailwind, React, or Vue Code
Ever stumbled upon a stunning website design and wished you could replicate it? Or maybe you're a developer tired of manually rewriting code from mockups? Well, fret no more! abi/screenshot-to-code emerges as a revolutionary tool, bridging the gap between visual inspiration and functional code.
What is abi/screenshot-to-code?
Imagine dropping a screenshot of a captivating website, and poof! You receive clean, editable code in your preferred format – HTML, Tailwind CSS, React, or Vue. That's the magic of abi/screenshot-to-code. This open-source project leverages the power of deep learning to analyze your screenshot and generate corresponding code, saving you precious time and effort.
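For the technically curious, the core idea is easy to sketch. Here's a minimal, illustrative example using the official OpenAI Python client (v1+); the prompt wording is my assumption, not the project's actual code:

import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Encode the screenshot as a base64 data URL for the vision model
with open("screenshot.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="gpt-4-vision-preview",
    max_tokens=4096,
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Generate a single HTML file using Tailwind CSS that reproduces this screenshot."},
            {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)  # the generated markup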
Ready to Unleash Your Creativity?
Head over to the abi/screenshot-to-code GitHub repository (https://github.com/abi/screenshot-to-code) to explore its potential, download the tool, and join the ever-growing community of developers and designers reshaping the way we bring visual ideas to life.
Remember: abi/screenshot-to-code is not a silver bullet, but it's a powerful tool that can significantly enhance your workflow and ignite your creative coding process. So, start uploading those screenshots and witness the magic of code generation unfold!
This simple app converts a screenshot into code (HTML + Tailwind CSS, React, Bootstrap, or Vue). It uses GPT-4 Vision to generate the code and DALL-E 3 to generate similar-looking images. You can now also enter a URL to clone a live website!
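To give a flavor of the image-replacement step, here's a hedged sketch of a DALL-E 3 call with the OpenAI Python client; the prompt is a made-up stand-in (the real app derives descriptions from the page):

from openai import OpenAI

client = OpenAI()

# Ask DALL-E 3 for a stand-in image similar to one found in the screenshot
result = client.images.generate(
    model="dall-e-3",
    prompt="A hero photo of a laptop on a desk, similar to the original page's image",
    size="1024x1024",
    n=1,
)
print(result.data[0].url)  # URL of the generated replacement image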
Demo video: https://github.com/abi/screenshot-to-code/assets/23818/6cebadae-2fe3-4986-ac6a-8fb9db030045
See the Examples section below for more demos.
🆕 Try it here (bring your own OpenAI key - your key must have access to GPT-4 Vision. See FAQ section below for details). Or see Getting Started below for local install instructions.
The app has a React/Vite frontend and a FastAPI backend. You will need an OpenAI API key with access to the GPT-4 Vision API.
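The VITE_WS_BACKEND_URL variable mentioned below suggests the frontend receives generated code from the backend over a WebSocket. As a rough sketch (the route name and message shape are assumptions, not the repo's actual protocol), such a FastAPI endpoint could look like:

from fastapi import FastAPI, WebSocket

app = FastAPI()

@app.websocket("/generate-code")
async def generate_code(websocket: WebSocket):
    await websocket.accept()
    params = await websocket.receive_json()  # e.g. {"image": "...", "stack": "html_tailwind"}
    # Stream the model's output chunk by chunk (stand-in chunks shown here)
    for chunk in ["<html>", "<body>...</body>", "</html>"]:
        await websocket.send_json({"type": "chunk", "value": chunk})
    await websocket.close()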
Run the backend (I use Poetry for package management; pip install poetry if you don't have it):
cd backend
echo "OPENAI_API_KEY=sk-your-key" > .env
poetry install
poetry shell
poetry run uvicorn main:app --reload --port 7001
You can also run type checks (when you're in backend):
poetry run pyright
Run the frontend:
cd frontend
yarn
yarn dev
Open http://localhost:5173 to use the app.
If you prefer to run the backend on a different port, update VITE_WS_BACKEND_URL in frontend/.env.local.
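For example, to point the frontend at a backend on port 7002 (a hypothetical port; the ws:// scheme is inferred from the variable name), frontend/.env.local would contain:

VITE_WS_BACKEND_URL=ws://localhost:7002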
For debugging purposes, if you don't want to waste GPT-4 Vision credits, you can run the backend in mock mode (which streams a pre-recorded response):
MOCK=true poetry run uvicorn main:app --reload --port 7001
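A flag like MOCK is typically just an environment check that swaps the API call for canned output. A hypothetical sketch (names are illustrative, not the repo's actual code):

import os

SHOULD_MOCK = os.environ.get("MOCK", "false").lower() == "true"
MOCK_CHUNKS = ["<html>", "<body>mock response</body>", "</html>"]  # canned output

async def stream_completion(send_chunk):
    # In mock mode, replay a pre-recorded response instead of calling the API
    if SHOULD_MOCK:
        for chunk in MOCK_CHUNKS:
            await send_chunk(chunk)
        return
    # ...otherwise call GPT-4 Vision as in the earlier sketch...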
You can also set your OpenAI API key in backend/.env or directly in the UI in the settings dialog.
If you have Docker installed on your system, run the following in the root directory:
echo "OPENAI_API_KEY=sk-your-key" > .env
docker-compose up -d --build
The app will be up and running at http://localhost:5173. Note that you can't develop the application with this setup, as file changes won't trigger a rebuild.
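For reference, a compose file for this kind of two-service setup looks roughly like the sketch below. This is a hypothetical reconstruction from the ports mentioned above, not the repository's actual docker-compose.yml:

services:
  backend:
    build: ./backend
    env_file: .env
    ports:
      - "7001:7001"
  frontend:
    build: ./frontend
    ports:
      - "5173:5173"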
Examples
NYTimes (original vs. replica screenshots)
Instagram page (with non-Taylor Swift pics)
Demo video: https://github.com/abi/screenshot-to-code/assets/23818/503eb86a-356e-4dfc-926a-dabdb1ac7ba1
Hacker News (it gets the colors wrong at first, so we nudge it)
Demo video: https://github.com/abi/screenshot-to-code/assets/23818/3fec0f77-44e8-4fb3-a769-ac7410315e5d
FAQ
You don't need a ChatGPT Plus subscription. Screenshot to code uses API keys from your OpenAI developer account. To get access to the GPT-4 Vision model, log into your OpenAI account and then follow these instructions:
You have to buy some credits. The minimum purchase is $5.
Go to Settings > Limits and check the bottom of the page; your usage tier has to be at least "Tier 1" to have GPT-4 access.
Some users have also reported that it can take up to 30 minutes after your credit purchase for the GPT-4 Vision model to be activated.
If you've followed these steps and it still doesn't work, feel free to open a GitHub issue.