zerojames's comments

Congratulations on launching the project!

Making a programming language is a great way to better understand computation and start to explore different means of expressing concepts in code beyond those available in existing languages.

Programming languages are a difficult concept to get one's head around. Having one that implements a lexer, parser, and execution engine -- and is packaged as a resource for others to use -- is an impressive technical feat.

Writing a guide so others can learn is doubly impressive.

Asking a question like "How exactly do programming languages work behind the scenes?" is a great quality to have in your creative pursuits. Not to mention that such questions are the basis of HN: journeys that satisfy our creative curiosities.

Keep making cool things!

---

One piece of feedback: I find the size of the code instruction boxes on the page a bit small. There are no visual indicators that I need to scroll down to see all of the code. I would love for the boxes to be taller so I can read the code and retain all the context without scrolling.

---

Another idea: You may enjoy writing a post about what you learned making the language. What did you find more difficult than you thought? What challenges did you run into? How did you solve them? Why did you choose the syntax you used?

I love reading about the how behind cool projects :D


I have run a few queries on Elicit to understand the product a bit more. I asked about media bias detection and used the topic analysis feature. A minute or so later, I had a list of concepts with citations and links to papers I can look at further. This feels like an _amazing_ tool to do literature overviews and to dive into new academic domains with which one is not familiar.

Try looking at https://inciteful.xyz/; there's also a Zotero plugin for it.

Elician here: thank you for sharing our tool and for this praise!

We're glad you're enjoying it.


(I work at Roboflow)

We're actively working on this!

Our ML team has noted that the segmentation masks from PaliGemma in particular are a bit tedious and unintuitive to decode. We should be pushing out (more!) open source software that uses this model in the coming days.

Look out for an easy way to fine-tune PaliGemma and broader support for its task types in `inference`, the package used in the blog post.


The data on comfort using eye tracking is fascinating (4.5 Results and Discussion):

  Participants complained that staring at so many targets made their eyes dry and uncomfortable. Eye fatigue scored lowest among all the questions. Participants gave eye tracking a modest favorable response overall of 4.5, just slightly higher than the mid-point.

A bit tangential, but my limited experience playing Star Citizen with eye tracking technology was exhilarating.

With the technology out at the time (and the little I could do with it), I was mostly limited to exaggerated eye movements.

Repeated "full side eye" gestures were definitely uncomfortable (physically) and at times somewhat nauseating.

After a few weeks of steady gameplay things were noticeably better, but still not something I could maintain without discomfort for more than 1.5 to 2 hours.


Summaries are less interesting to me than how I get to the information in the first place. My general hierarchy for sifting is:

Title -> Abstract -> skim directly to the section(s) that are most interesting or answer a question I have.

For most papers, I can gauge interest based on the title. But there are so many!

I have a secondary problem of finding research on a topic: quantitative linguistics. arXiv has a category on Computation and Language, but it is mostly LLMs.


Yup, so you'd like to:

1. Given your resources, filter out the sections that are relevant to your topic of interest

2. Find resources related to a topic

In order of pain-point severity, is this correct? What is the goal of your research?


What the newsletter app does: it groups materials into up to 5 topics and generates an article for each topic based on the resources you've provided.


I will make that substitution, thank you!

Someone told me about that organ. I had just added it to my backlog. I have never played a real organ. I wonder how Taylor Swift's music would sound on an organ...


Reference, for context: https://c2pa.org/

And: the BBC just started using C2PA across some content. The BBC's R&D team talking about it: https://www.bbc.co.uk/rd/blog/2024-03-c2pa-verification-news...


Related critique of the BBC's use of C2PA, and C2PA in general: https://www.hackerfactor.com/blog/index.php?/archives/1024-I...


That was an interesting rabbit hole of articles, thanks. From an earlier article: [1]

> At FotoForensics, I'm already seeing known fraud groups developing test pictures with C2PA metadata. (If C2PA was more widely adopted, I'm certain that some of these groups would deploy their forgeries right now.)

> To reiterate:

> * Without C2PA: Analysis tools can often identify forgeries, including altered metadata.

> * With C2PA: Identifying forgeries becomes much harder. You have to convince the audience that valid, verifiable, tamper-evident 'authentication and provenance' that uses a cryptographic signature, and was created with the backing of big tech companies like Adobe, Microsoft, Intel, etc., is wrong.

> Rather than eliminating or identifying fraud, C2PA enables a new type of fraud: forgeries that are authenticated by trust and associated with some of the biggest names on the tech landscape.

[1] https://www.hackerfactor.com/blog/index.php?/archives/1013-C...
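The critique's core point generalizes beyond C2PA's specific container. A toy sketch (plain HMAC over JSON here, nothing like the real C2PA manifest format) shows why: verification only proves that the key holder signed these exact bytes, not that the claims inside them are true.

```python
import hashlib
import hmac
import json

# Toy illustration only (NOT the real C2PA format): a key holder signs
# arbitrary provenance claims, which may be entirely false.
key = b"trusted-vendor-key"

claims = {"creator": "Alice", "edits": "none"}  # could be a lie
payload = json.dumps(claims, sort_keys=True).encode()
signature = hmac.new(key, payload, hashlib.sha256).hexdigest()

# Verification succeeds because the bytes match the signature...
assert hmac.compare_digest(
    signature, hmac.new(key, payload, hashlib.sha256).hexdigest()
)
# ...but nothing anywhere checked whether the claims reflect reality.
```

In the fraud scenario above, the attacker's false claims come out the other end countersigned by trusted tooling: the math checks out, so the forgery gains credibility rather than losing it.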


Roboflow | ML Engineers / ML Lead / Field Engineers | Full-time (Remote, SF, NYC) | https://roboflow.com/careers?ref=whoishiring0424

Roboflow is the fastest way to use computer vision in production. We help developers give their software the sense of sight. Our end-to-end platform[1] provides tooling for image collection, annotation, dataset exploration and curation, training, and deployment.

Over 250k engineers (including engineers from 2/3 of the Fortune 100 companies) build with Roboflow. We now host the largest collection of open source computer vision datasets and pre-trained models[2]. We are pushing forward the CV ecosystem with open source projects like Autodistill[3] and Supervision[4]. And we've built one of the most comprehensive resources for software engineers to learn to use computer vision with our popular blog[5] and YouTube channel[6].

We have several openings available but are primarily looking for strong technical generalists who want to help us democratize computer vision and like to wear many hats and have an outsized impact. Our engineering culture is built on a foundation of autonomy & we don't consider an engineer fully ramped until they can "choose their own loss function". At Roboflow, engineers aren't just responsible for building things but also for helping us figure out what we should build next. We're builders & problem solvers; not just coders. (For this reason we also especially love hiring past and future founders.)

We're currently hiring full-stack engineers for our ML and web platform teams, a web developer to bridge our product and marketing teams, several technical roles on the sales & field engineering teams, and our first applied machine learning researcher to help push forward the state of the art in computer vision.

[1]: https://roboflow.com/?ref=whoishiring0424

[2]: https://roboflow.com/universe?ref=whoishiring0424

[3]: https://github.com/autodistill/autodistill

[4]: https://github.com/roboflow/supervision

[5]: https://blog.roboflow.com/?ref=whoishiring0424

[6]: https://www.youtube.com/@Roboflow


Is there a packaging ecosystem, or does PyPI interoperate? I would love to start writing some Mojo code, but a key thing I'd love to do is make web requests. Any direction you have would be sincerely appreciated!


We're definitely going to explore this further, and have begun some discussions with the community to try and shape what this will look like for Mojo; see https://github.com/modularml/mojo/discussions/1785 for more details.

The discussion there is centered on a project manifest and build tool, and work is underway. Some of our guiding principles are to write this tooling in Mojo, as open source software, and to be focused on integration with existing build systems and ecosystems.

Of course, once we're able to define a Mojo project, and have a principled method of building all Mojo projects in the known universe, the natural next step would be to build out a packaging ecosystem. Again, we're interested in playing well with others, so while it's still early days, I think we'll want to have a well-thought-out interface with PyPI.

As for web requests: try searching GitHub for Mojo projects that may suit your needs: https://github.com/search?q=language%3Amojo+http&type=reposi... -- and by the way, we're also very proud that Mojo has become popular enough on GitHub that it can be searched like this, and its source code is syntax highlighted. If you don't see any Mojo packages on there that you like, try writing one yourself and letting us know on our Discord! Although we don't (yet!) have a build tool, you can clone a Mojo package from GitHub and import it into your project manually, so sharing code is possible -- we're working to make it not just possible, but downright awesome!


This is fun!

