Instantly Code Your Vision: A Deep Dive into abi/screenshot-to-code
Ever stumbled upon a stunning website design and wished you could replicate it? Or maybe you're a developer tired of manually rewriting code from mockups? Well, fret no more! abi/screenshot-to-code emerges as a revolutionary tool, bridging the gap between visual inspiration and functional code.
What is abi/screenshot-to-code?
Imagine dropping in a screenshot of a captivating website and, poof, receiving clean, editable code in your preferred format: HTML with Tailwind CSS, React, Bootstrap, or Vue. That's the magic of abi/screenshot-to-code. This open-source project uses GPT-4 Vision to analyze your screenshot and generate the corresponding code, saving you precious time and effort.
Key Features and Benefits:
- Effortless Code Generation: Simply upload your screenshot, choose your desired output format, and let the tool work its magic.
- Multiple Framework Support: Generate code for various popular frameworks, catering to diverse developer preferences.
- Open-Source Transparency: Access and contribute to the code for complete control and potential customization.
- Active Development: Regular updates and improvements ensure the tool stays ahead of the curve.
However, it's essential to be realistic about potential limitations:
- Complexity Matters: It works best on simple, clearly structured layouts; highly intricate designs may not translate cleanly.
- Screenshot Quality is Crucial: A blurry or unclear screenshot can hinder the tool's accuracy.
Who Can Benefit?
- Web Developers: Streamline your workflow by quickly converting design mockups into working code.
- Designers: Validate your UI/UX concepts by generating the underlying code for feasibility testing.
- Front-End Enthusiasts: Learn from existing designs by generating code from their screenshots and studying the result.
Beyond the Basics:
- Community-Driven: Engage with the active abi/screenshot-to-code community on GitHub for support, feedback, and collaboration.
- Continuous Evolution: Stay updated with the latest developments and contribute to making the tool even better.
Ready to Unleash Your Creativity?
Head over to the abi/screenshot-to-code GitHub repository (https://github.com/abi/screenshot-to-code) to explore its potential, download the tool, and join the ever-growing community of developers and designers reshaping the way we bring visual ideas to life.
Remember: abi/screenshot-to-code is not a silver bullet, but it's a powerful tool that can significantly enhance your workflow and ignite your creative coding process. So, start uploading those screenshots and witness the magic of code generation unfold!
screenshot-to-code
This simple app converts a screenshot to code (HTML + Tailwind CSS, React, Bootstrap, or Vue). It uses GPT-4 Vision to generate the code and DALL-E 3 to generate similar-looking images. You can now also enter a URL to clone a live website!
https://github.com/abi/screenshot-to-code/assets/23818/6cebadae-2fe3-4986-ac6a-8fb9db030045
See the Examples section below for more demos.
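Under the hood, generating code from a screenshot boils down to an OpenAI chat completion call with an image attached. The sketch below is illustrative only: the prompt text is invented and the app's actual internals may differ; the request shape follows OpenAI's public GPT-4 Vision API:

```bash
# Illustrative sketch only: not the app's actual code.
# The prompt text is invented; the payload shape follows OpenAI's public API.
curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gpt-4-vision-preview",
    "messages": [{
      "role": "user",
      "content": [
        {"type": "text", "text": "Generate HTML + Tailwind code that reproduces this screenshot."},
        {"type": "image_url", "image_url": {"url": "https://example.com/screenshot.png"}}
      ]
    }],
    "max_tokens": 4096
  }'
```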
🚀 Try It Out!
🆕 Try it here (bring your own OpenAI key - your key must have access to GPT-4 Vision. See FAQ section below for details). Or see Getting Started below for local install instructions.
🛠 Getting Started
The app has a React/Vite frontend and a FastAPI backend. You will need an OpenAI API key with access to the GPT-4 Vision API.
Run the backend (I use Poetry for package management; run pip install poetry if you don't have it):
```bash
cd backend
echo "OPENAI_API_KEY=sk-your-key" > .env
poetry install
poetry shell
poetry run uvicorn main:app --reload --port 7001
```
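Once uvicorn reports that it is running, you can sanity-check from another terminal that the server is listening; any HTTP response at all (even a 404, since the exact routes depend on the app) confirms the backend is up:

```bash
# Any HTTP response, even a 404, proves uvicorn is listening on port 7001.
curl -i http://localhost:7001
```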
You can also run type checks with Pyright (when you're in backend):

```bash
poetry run pyright
```
Run the frontend:
```bash
cd frontend
yarn
yarn dev
```
Open http://localhost:5173 to use the app.
If you prefer to run the backend on a different port, update VITE_WS_BACKEND_URL in frontend/.env.local, as shown below.
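For example, to point the frontend at a backend running on port 8001 (the ws:// scheme is my assumption based on the variable name, not something the README specifies):

```bash
# Hypothetical example: frontend/.env.local pointing at a backend on port 8001.
# The ws:// scheme is assumed from the variable name (WebSocket backend URL).
echo "VITE_WS_BACKEND_URL=ws://127.0.0.1:8001" > frontend/.env.local
```

Restart yarn dev afterwards so Vite picks up the change.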
For debugging purposes, if you don't want to waste GPT-4 Vision credits, you can run the backend in mock mode (which streams a pre-recorded response):

```bash
MOCK=true poetry run uvicorn main:app --reload --port 7001
```
Configuration

- You can configure the OpenAI base URL if you need to use a proxy: set OPENAI_BASE_URL in backend/.env or directly in the UI in the settings dialog (see the example below).
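A minimal example, assuming your proxy speaks the OpenAI API (the URL is a placeholder):

```bash
# Appends to backend/.env; the proxy URL is a placeholder, not a real endpoint.
echo "OPENAI_BASE_URL=https://my-proxy.example.com/v1" >> backend/.env
```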
Docker
If you have Docker installed on your system, in the root directory, run:
echo "OPENAI_API_KEY=sk-your-key" > .env
docker-compose up -d --build
The app will be up and running at http://localhost:5173. Note that you can't develop the application with this setup as the file changes won't trigger a rebuild.
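The standard Compose commands apply for inspecting and stopping the stack:

```bash
docker-compose logs -f   # follow the containers' logs
docker-compose down      # stop and remove the containers
```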
🙋‍♂️ FAQs
- I'm running into an error when setting up the backend. How can I fix it? Try this. If that still doesn't work, open an issue.
- How do I get an OpenAI API key? See https://github.com/abi/screenshot-to-code/blob/main/Troubleshooting.md
- How can I provide feedback? For feedback, feature requests and bug reports, open an issue or ping me on Twitter.
📚 Examples
NYTimes
Original vs. replica screenshots (the side-by-side images are in the repository README).
Instagram page (with non-Taylor Swift pics)
https://github.com/abi/screenshot-to-code/assets/23818/503eb86a-356e-4dfc-926a-dabdb1ac7ba1
Hacker News, but it gets the colors wrong at first, so we nudge it
https://github.com/abi/screenshot-to-code/assets/23818/3fec0f77-44e8-4fb3-a769-ac7410315e5d
Getting an OpenAI API key with GPT-4 Vision model access
You don't need a ChatGPT Plus subscription. Screenshot to code uses API keys from your OpenAI developer account. To get access to the GPT-4 Vision model, log into your OpenAI account and follow these instructions:
- Open the OpenAI Dashboard
- Go to Settings > Billing
- Click Add payment details and buy some credits (the minimum is $5)
- Go to Settings > Limits and check at the bottom of the page that your current tier is "Tier 1"; GPT-4 access requires it
- Go to Screenshot to code and paste your API key in the Settings dialog under OpenAI key (gear icon). Your key is stored only in your browser, never on our servers.
Some users have also reported that it can take up to 30 minutes after your credit purchase for the GPT-4 Vision model to be activated.
If you've followed these steps and it still doesn't work, feel free to open a GitHub issue.
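Before opening an issue, one quick check is whether your key can actually see a GPT-4 vision model. This uses OpenAI's public models endpoint; the exact model id available to you may vary:

```bash
# Lists the models your key can access; look for a GPT-4 vision model id.
curl -s https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY" | grep -i "gpt-4"
```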