Hacker News
MPSimmons · 24 days ago · on: Llama3 running locally on iPhone 15 Pro
I can't wait until Groq or someone else releases tiny mobile inference engines built specifically for phones and similar devices.
Largeapplemodel · 19 days ago
There are already tiny LLMs for this. They're bad, because they don't hold enough information to be coherent.