
Author here. Sorry about that, I just deployed a fix, so it should be readable now. If it's not, here are the first few points:

- Once you get good at Rust, all of these problems will go away

- Rust being great at big refactorings solves a largely self-inflicted issue with the borrow checker

- Indirection only solves some problems, and always at the cost of dev ergonomics

- ECS solves the wrong kind of problem

- Generalized systems don't lead to fun gameplay

- Making fun & interesting games is about rapid prototyping and iteration, and Rust's values are everything but that

- Procedural macros are not even "we have reflection at home"

- ...

The list corresponds to the titles of the sections in the article.


Author of Comfy here, happy to answer any questions :)

A few relevant links:

- website: https://comfyengine.org/

- announcement blog post: https://comfyengine.org/blog/first/


build.rs is a source file that you can audit. A binary without a reproducible build is not auditable even if someone wanted to audit it.

No single person audits their entire dependency tree, but many people do read the source code of at least some of their dependencies, and as a community we can figure out when something is fishy, like in this case.

But when there are binaries involved, nobody can do anything.

This isn't the same as installing a signed binary from a Linux package manager, with a checksum and a verified build system. It's a random binary blob someone made in a way that nobody else can check, and it's just "trust me bro there's nothing bad in it".
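To illustrate the difference: with a distro package you can at least verify that the bytes you got are the bytes everyone else can audit. A minimal sketch in Python (the file name and digest here are made up, just to show the idea):

    import hashlib

    # Hypothetical digest, as published alongside a distro package
    EXPECTED_SHA256 = "0123abcd..."  # placeholder

    def sha256_of(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    if sha256_of("some-precompiled-binary") != EXPECTED_SHA256:
        raise SystemExit("checksum mismatch, refusing to run")

With an opaque blob there is no published digest to check against, and no build recipe to reproduce it from.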


> and it's just "trust me bro there's nothing bad in it".

The developer should be very concerned about what happens if their systems are compromised and an attacker slips a backdoor into these binaries-- it will be difficult, if not impossible, to convince people that the developer didn't do it intentionally. The binaries' opacity and immediacy make them much more interesting targets for attack than the source itself (and its associated build scripts).

Saving a few seconds on the first compile on some other developer's computer hardly seems worth that risk.

And at the meta level, we should probably worry about the security practices of someone who isn't worrying about that risk-- what else aren't they worrying about?


Stan gives you the ability to do probabilistic reasoning. There is actually Tensorflow Probability (https://www.tensorflow.org/probability) which has a lot of overlapping algorithms, but isn't as mature and approaches some things differently.

The main difference is that with Stan you think in terms of random variables and distributions (and their transformations), while with Tensorflow/DL you think in terms of predicting directly from data. Stan lets you model a problem with probabilities and do arbitrary inference, generally asking any question you want about your model.
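To make the "random variables and distributions" style concrete, here's a minimal sketch in PyMC3 (a related library that comes up below; the data is made up):

    import numpy as np
    import pymc3 as pm

    data = np.random.normal(1.0, 2.0, size=100)  # made-up observations

    with pm.Model():
        # You declare random variables and their priors...
        mu = pm.Normal("mu", mu=0.0, sigma=10.0)
        sigma = pm.HalfNormal("sigma", sigma=5.0)
        # ...tie them to observed data...
        pm.Normal("obs", mu=mu, sigma=sigma, observed=data)
        # ...and ask for the posterior, rather than a point prediction.
        trace = pm.sample(1000)

    print(trace["mu"].mean(), trace["sigma"].mean())

The equivalent Stan model declares the same variables in Stan's own modeling language.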

There are many other interesting alternatives, e.g. http://pyro.ai/, which takes yet another approach, merging DL and probabilistic programming with variational inference. (Stan and TFP can do variational inference too, but I guess it's like Python vs JavaScript vs Ruby vs Java - all of them can be used for programming, but not in the same way).


The next cut of Stan will likely use TFP as a backend. I think that PyMC4 will also. The Stan team wrote everything from scratch in C++, including their own autodiff code, which many regard as quite a stretch in terms of long-term maintenance. Since TFP executes on top of Tensorflow, things like autodiff and many of the other performance concerns that take up so much Stan-dev time are already taken care of.
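As a small illustration of the autodiff you get for free on top of Tensorflow (a minimal TF 2.x sketch):

    import tensorflow as tf

    x = tf.Variable(3.0)
    with tf.GradientTape() as tape:
        y = x * x  # any computation built from TF ops
    print(tape.gradient(y, x))  # dy/dx = 2*x = 6.0

This is the kind of machinery Stan-dev has to hand-maintain in C++.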


PyMC4 on TFP was the plan, but they made a recent announcement [1] indicating those efforts would stop, and instead, they would develop PyMC3+JAX+Theano.

[1] https://pymc-devs.medium.com/the-future-of-pymc3-or-theano-i...


Woah, thanks for the link. As a PyMC3 user I was not looking forward to the transition to 4, expecting to have to relearn the API like in the transition from 2 to 3. I was debating whether I should learn 4 or switch to a different library, when all I really wanted to do was stick with 3.

Looks like I get the best of both worlds now.


Please no, we don't need Stan to be rebuilt with a Python backend. That it's built in C++ and can be called from higher-level APIs is part of the appeal.


Vim, and overall just being fast. The difference is extremely perceptible, especially if you've ever used something like Arch Linux + st (or one of the faster terminals). iTerm is slow as shit compared to that.


You mention you wrote most of the parsing yourself. Is there any reason why you didn't use something like LLVM to do the heavy lifting?


In that case you have a much bigger problem :P


> This is why interview code tests are... badly misguided. Most "fizzbuzz" screening is grossly artificial, denying one the feedback loops which are a critical element of productivity, and without which one is relegated to spending time manually checking what automation does almost instantly.

Do you really need to compile/run something like fizzbuzz? What if you're writing code that can't be run, such as when modifying a larger piece of code that won't run until all modifications are made?

Isn't there some value in being able to verify the correctness of code just by looking at it for a few seconds? Surely you'll overlook some things, but with practice probably far fewer than people think.


> High latency feedback forces you to be more methodical in your development and think about the changes you're making.

This! People rely on their fancy REPLs and super fast feedback loops and 1000 unit tests too much these days. What do you actually do when you can't run the code? What if you have to debug it just by reading it?

There's a lot to be said about being efficient with trivial changes vs being methodical and able to solve much more complicated problems when they arise.


This is exactly the reason why I quite strongly discourage teaching programming by starting with IDEs. Far too often I see beginners fall into what I call "programming tunnel vision": they repeatedly make tiny and often random changes to a piece of code in an attempt to get it to compile or produce the right result, seeming to completely abandon any thought of the overall goal. Lower-latency feedback only encourages this behaviour more. The same phenomenon also happens if you give them a debugger --- they spend plenty of time just stepping through the code without any good sense of the bigger picture. Maybe it feels productive, but it's not. Their attention is so preoccupied with the feedback that they do not think deeply enough about their solution, and as a result, overall code quality often suffers too.

Instead, I believe in thinking carefully about the problem. Close your eyes and visualise the program and its data and control flow in your mind, then write the code. Use a whiteboard or even pencil and paper to collect your thoughts and get a good mental model of what you're trying to accomplish. Block out all other distractions and focus on the problem.

Many others I've talked to are in disbelief when I tell them I can spend an hour writing several hundred lines of code that compiles and works flawlessly the first time, but this is what careful thought will allow. Even with a very fast feedback loop you may spend several times longer fiddling with the code until you get something that seems to work but actually doesn't in all cases, precisely because you never thought about those cases while you were fiddling with it and had your attention focused on getting that next dose of feedback.


I'm glad I found someone who shares my point of view. You're right about IDEs and debuggers.

> Instead, I believe in thinking carefully about the problem. Close your eyes and visualise the program and its data and control flow in your mind, then write the code. Use a whiteboard or even pencil and paper to collect your thoughts and get a good mental model of what you're trying to accomplish. Block out all other distractions and focus on the problem.

It's funny how many problems I've solved by writing code on paper/whiteboard when I got stuck doing actual programming. It's so much easier to focus on the problem when there's no code to run.

Another thing I found useful is just reading the code outside of an editor. Either by printing it out and scribbling over it with a pencil, or just reading it on a phone/tablet that can't run the code.

> Many others I've talked to are in disbelief when I tell them I can spend an hour writing several hundred lines of code that compiles and works flawlessly the first time, but this is what careful thought will allow.

I've been having the same experience. Recently at uni we were given an assignment to write an interpreter for a rather simple imperative language (conditionals, loops, simple recursive functions and stack depth checking). We had 3 hours to write a program that could interpret a sample program.

Most people struggled to get anything working at all during that time, since each and every one of them I talked to didn't have a clear picture of what they were trying to build.

It took me a little over an hour to write the whole thing in almost a single pass, in a modular fashion with a separate tokenizer, parser and evaluator, with the necessary checks. There was no need to run the code; most of it was rather trivial, implementing simple state machines. It was quite a bit of code (over 1000 lines), but there was almost no thinking required if you knew how the parse tree should look.

In situations like this I'd even say it's hard to make the program not work if you're methodical, working step by step and checking if you've covered all the cases.
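The "simple state machine" part really is trivial if you've thought it through beforehand. A toy sketch of the tokenizer idea in Python (not the actual assignment code):

    # Toy tokenizer: a simple state machine over the input string.
    def tokenize(src):
        tokens, i = [], 0
        while i < len(src):
            c = src[i]
            if c.isspace():
                i += 1
            elif c.isdigit():
                j = i
                while j < len(src) and src[j].isdigit():
                    j += 1
                tokens.append(("NUM", int(src[i:j])))
                i = j
            elif c.isalpha():
                j = i
                while j < len(src) and src[j].isalnum():
                    j += 1
                tokens.append(("IDENT", src[i:j]))
                i = j
            elif c in "+-*/=();":
                tokens.append(("OP", c))
                i += 1
            else:
                raise SyntaxError("unexpected character: " + c)
        return tokens

    print(tokenize("x = 3 + 42"))
    # [('IDENT', 'x'), ('OP', '='), ('NUM', 3), ('OP', '+'), ('NUM', 42)]

The parser and evaluator are written in the same methodical, case-by-case style.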


>>"I quite strongly discourage teaching programming by starting with IDEs..."

>>"...also happens if you give them a debugger..."

1). I assume you can cite no research supporting the idea that new programmers are better off with your recommendations?

2). Your idea doesn't seem to take into account that different people think in different ways. I believe this approach was good for you, but as far as we know you could be in the minority, right?

For these reasons I don't think there is enough data to make blanket recommendations against IDEs and debuggers.


Are you genuinely attempting to argue that thinking ahead and fully understanding the problem isn't preferable to tweaking one's way to a solution?


"Thinking ahead" is often a great excuse to design an overengineered mess of a solution that can't be tweaked and doesn't really properly solve the problem either. To be pithy - see Java.

Sometimes, exploring the problem space can give you a fuller understanding of a problem faster by forcing you to confront pitfalls that may not be obvious until you try a solution. We use all kinds of wonderful terms for this - "Agile", "Prototyping", etc.

Both extremes - fetishizing planning and up front design, or fetishizing short term iteration and poking things without deeper thought - have their problems, and occur too often. Neither tool is a panacea, but both have their place.


There's a difference between "thinking ahead into the next problem", i.e. premature generalisation, and "thinking ahead into the details of the current problem".

It's good that you mentioned Java, because it is a language I find extremely IDE-centric, and I suspect that's also what makes premature generalisation so easy --- creating new classes with tons of boilerplate automatically generated by the IDE is so easy that it encourages programmers to do just that. That doesn't help one bit with the details of the algorithm, unfortunately; it often gets "OOP-ified" into a dozen classes and much-too-short methods created as a result of the "fiddle with it until it works" mentality.


'Fiddle with it until it works' has to be done when you are working with a product that isn't documented well enough. Using that mentality for programming in general is bad, but there are some situations where experimentation has to be done to work out how parts of the product work.


Java as a language is producing more value for actual businesses than most other popular languages. Where you see an overengineered mess, others see valuable abstractions, extensibility, compatibility and self-documentation. Unfortunately, understanding this so-called mess requires knowledge of the lingua franca of object-oriented design, which has fallen out of favour with the new generation.

I'm not saying that there are no unjustifiably overengineered Java libraries, but the current hype cycle of web frameworks seems to indicate that the burden of proof of good design should lie with current technologies as well as previous ones.


> Java as a language is producing more value for actual businesses than most other popular languages. Where you see an overengineered mess, others see valuable abstractions, extensibility, compatibility and self-documentation.

"Everyone uses it" or "it's producing value" doesn't mean it's not an overengineered mess that everyone recognizes as such - it just means that imperfect code still trumps no code. Switching languages usually means tossing out your old codebase, leaving you at "no code".

I have worked on such messes, created such messes (oops!), and cleaned up such messes.

That said, I'm sure there is a Java project out there which actually benefits from stereotypical levels of Java abstraction and patterns - and I'm sure there's a few codebases out there where "my" and "others" opinions differ exactly as you say.

> I'm not saying that there are no unjustifiably overengineered Java libraries, but the current hype cycle of web frameworks seems to indicate that the burden of proof of good design should lie with current technologies as well as previous ones.

100% agreed - not that I'm qualified enough at web dev to have much of an opinion on this. If anything, the churn of web frameworks smacks of being both overengineered (do you really need a whole framework for that?) and underengineered (wait why are we replacing things yet again?) simultaneously.


Spring comes to mind as a widely used framework that benefits from those "stereotypical levels of Java abstraction and patterns."

But it's the exception rather than the rule. Once you have something like Spring in your codebase, to take care of modularity and reuse, everything else should be coded with as little "abstraction and patterns" as possible.


WhitneyLand is probably not arguing against the claim you make in the large, but against the unsubstantiated argument that using IDEs and debuggers is more likely to lead you to that style of thinking than high latency variants.

One could just as easily hypothesize that these tools let you avoid thinking in the small, and help you form a big picture overview that would otherwise be difficult to understand.


Those were not my words. I said that discouraging all students from using IDEs and debuggers doesn't make sense.

I went quite a while with no tools other than a hex editor to type in op codes. I don't think it did anything except hurt productivity.

Maybe you learn or work better that way. I don't. And I don't see how you justify assuming all new programmers would.


Discouraging someone from using an IDE is more a symptom of the target language's shortcomings. Xcode provided me with beautiful compile-time errors for both Objective-C and Swift, and forced me to really think about what I was doing. Incidentally, I learned both languages from the IDE.

Would I recommend an IDE for a low level language like C? Probably not, because it forces a kind of laziness on the programmer.

Maybe an IDE isn't the solution, but a starting point to build upon. Something like the interactive environment LightTable has, where you can quickly eval blocks of code and see the end result without having to re-compile your entire program. Certain languages are better suited to this, and certain paradigms (reactive programming comes to mind).


If they won't, I will. Working code (in a good language) is the best way to work on the problem, far better than a whiteboard where you have no undo, no VCS tagging, no ability to come up with reusable components... Trying to do it all in your head would be even worse.

Of course it's possible to push code around on the page until it seems to work, just as it's possible to push symbols around the page until it seems to work when answering a mathematical question on paper. (Unfortunately some languages/compilers will run code that doesn't make any sense, but that's more true on paper, not less)


No, that was your interpretation.


I think the "compleat" programmer can move freely between the two extremes. I have one piece of code -- in Common Lisp! -- that I've been working on for a couple of months of weekends and still haven't tried to run (except for an occasional one-line experiment to check that I have the correct syntax for a macro).

But I can also adopt a much more interactive, experimental approach in situations where experiments are cheap and easy.

It all depends on the nature of the task.


oh, those 1000 lines of C...

If you think small code/functions are just the result of "short-term decisions", that is wrong. Small and short doesn't imply ill-developed or short-sighted code, nor a lack of the whole picture.

Small and well-thought-out code often contains an abstraction. And compiling such abstract-level code takes even less time, while still doing some syntax checking and (optionally) type checking. Once you fix the abstract layer, you proceed to the details.

"Close your eyes and visualize the program"? Why not just draw the image as an abstract program on the screen? You know, Hackers and Painters is a real thing. Good abstract code describes itself on the screen, you don't have to imagine the behavior in your head. Code is much better than your volatile image in the head that may go away if you go to sleep.


I think that when you have to resort to using a debugger, it is probably better to discard the code entirely and rethink the solution.


I've only used a debugger a few times, when the program did not seem to behave according to the source code. That was invariably due to a third-party bug (compiler, library, OS) or to me failing to understand a subtle point about the language or library I was using.

But in general I agree with you.


If you have to break out the multimeter, it's probably just better to throw out that radio and make a new one.


I don't think this is the right analogy, because code can't "go bad" on its own one day (as if some capacitor had gone dry), unless you modify it to do so. Maybe when you don't have access to the source code and you want to see what is going on, using a debugger could make sense.


One thing that helps is having short, self-contained, composable pieces of code that are easy to run in a repl or compile. This also helps testing and general understanding.
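For example, a small pure function like this can be poked at in any repl without spinning up the rest of the program (toy example):

    def moving_average(xs, window):
        # Pure and self-contained: trivial to test in isolation.
        return [sum(xs[i:i + window]) / window
                for i in range(len(xs) - window + 1)]

    print(moving_average([1, 2, 3, 4, 5], 2))  # [1.5, 2.5, 3.5, 4.5]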


And all of those choices are utterly horrible (IDEs, not editors).


What makes you say that?

