Bachelor in Physics
PhD student in AI and computer engineering
Master of bad puns
On 2/2/2014, Philip Seymour Hoffman died of a drug overdose. Do you have a favorite movie of his, or any scenes that he was in which you really liked?
Do you get to the Cloud District very often?
Sorry, I've never played Skyrim. Sounds cool though. I remember living in an upcountry area when I was young, and seeing the road meet the sky had inspired many dreams of aerial cities. I haven't revisited those districts much in a very long time. Now it's villages and islands in the sky. Maybe I'll dream of something larger once I have some time to. For now my attention is on the ground.
Why is learning vector calculus so hard?
Sorry for the late response! Vector calculus starts off hard, but then becomes really easy. The trick is to realize that it often just reduces to solving regular calculus problems and recognizing symmetries (sometimes you don't even need to evaluate an integral if you can see it corresponds to a known shape). You're ultimately going to be integrating over a scalar variable, and you can usually pull a constant unit vector out of the integral. Just think: if you can get your equation into a form (by expanding dot products and cross products) where you can see all the dx, dy, and dz parts, then you will probably know how to solve it! The most annoying thing to me was realizing just how many things could be related by triangles. I'm sorry if I can't do your question justice; there's a lot more I could say here, and I don't know exactly what you're struggling with. I had a hard time too, but now it feels natural, so keep at it and you'll get there.
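For a rough idea of what I mean (a toy example I'm making up here, not from any particular course): take the field $\mathbf{F} = y\,\hat{\imath} + x\,\hat{\jmath}$ along the straight line from $(0,0)$ to $(1,1)$, parametrized by $x = t$, $y = t$ with $t$ running from 0 to 1. Expanding the dot product leaves nothing but an ordinary one-variable integral:

$$\int_C \mathbf{F} \cdot d\mathbf{r} = \int_C (y\,dx + x\,dy) = \int_0^1 (t + t)\,dt = \int_0^1 2t\,dt = 1$$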
What is the meaning of loaf?
Where does your header image come from? Did you paint it?
What do you think about ChatGPT? A lot of people seem to take its answers at face value.
It's certainly impressive, but I haven't been giving it much attention. I do think its ability to store the information it's trained on (even if inaccurately, and without understanding its significance) and retrieve it by parsing natural language queries is something I'd like to understand better. Applying transfer learning to one of these large language models is worth looking into. I can assure you of one thing though: the architecture of the human mind (or anything like it) has so many finely tuned parts, and depends so much on the environment it's in, that a language model alone cannot feasibly be trained to replicate that structure. For one, a lot of that information has not been written down yet, and much of it depends on values and structures that evolve over time, for which it would be very difficult to find training data capable of reproducing them.
How do I make friends with academically-inclined people if I have no connection to universities?
Good question! Good because it's hard to answer, but important and interesting. If you have any interest in mathematics, you could study a niche field, in which case you'll meet pretty much everyone researching it. You could also read a lot of papers and try publishing your own, or start a blog discussing them. If there's a book club in your area, maybe you could join it and meet people there? The easiest way, though, might be if your job or hobby has led you to some questions you'd like help answering, in which case you might send off an email to somebody with a similar passion who can help you answer them.
But if you have a friend who is friends with academically-inclined people, you might want to ask them to introduce you.
How are you a PhD student so young??
I'm not sure how to ask this, but can an AI come up with something revolutionary, like something beyond what it "learns"?
Yeah, I get what you're saying! Can AI do something original instead of just the task it is trained to do? There are a lot of clear cases where AI can do the task it is trained to do in original ways, but that is still rather passive learning, and I think if we want to talk about something revolutionary we can do better than that.
This is one of the areas that neural networks are weak in, since they are a form of supervised learning. They learn from examples, which requires that we have already found a way to evaluate those examples ourselves (even if that method is intuition or perception). To go beyond that requires creativity: evaluating meaningful things nobody has ever thought of before.
There are several ways to approach creativity. The first is unsupervised learning, like clustering, which sees patterns without having to be told what patterns are there. I've always been fascinated by that: creating a completely new concept from what you see in the world, then figuring out how it relates to other things. A second approach is exploration, where you search some space to find interesting and maybe unexpected results in it. I'm personally very excited by fitness functions with a lot of structure to them, because then there are a lot of different ways things can have value. Things that have value in multiple ways are really intriguing, like how ice cream on a hot summer day is both a good dessert and cold! A third way I really want to work on is something that tackles really abstract problems and uses its circumstances to gradually come up with a solution without having to complete prior steps, so that serendipity and the connections it makes with others play a role. As a matter of practicality, that might really be the best way to be revolutionary.
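To make the second idea a little more concrete, here's a toy sketch in Python (entirely made up for illustration, nothing like real research code): explore a space at random, score each candidate with two different fitness functions, and keep the ones that aren't beaten on both counts by anything else.

```python
import random

# Toy exploration with two notions of value: keep the candidates that are
# not dominated (beaten on both scores) by any other candidate.

def value_a(x, y):
    # one way a candidate can be good (peak near (0.3, 0.7))
    return -(x - 0.3) ** 2 - (y - 0.7) ** 2

def value_b(x, y):
    # a second, competing way it can be good (peak near (0.8, 0.2))
    return -(x - 0.8) ** 2 - (y - 0.2) ** 2

def dominates(p, q):
    # p dominates q if it is at least as good on both values and not equal to q
    return p[0] >= q[0] and p[1] >= q[1] and p != q

candidates = [(random.random(), random.random()) for _ in range(200)]
scored = [(value_a(x, y), value_b(x, y), (x, y)) for x, y in candidates]

# The candidates that have value in more than one way survive here.
front = [s for s in scored
         if not any(dominates((t[0], t[1]), (s[0], s[1])) for t in scored)]

for a, b, (x, y) in sorted(front, reverse=True):
    print(f"({x:.2f}, {y:.2f})  value_a={a:.3f}  value_b={b:.3f}")
```

The interesting part is that no single number decides what is "best"; the structure of the two fitness functions is what makes some candidates stand out.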
How do you read mathematics in your free time? Like, are you reading textbooks?
Yes, textbooks are a good way to read mathematics, as long as they aren't too far from what you currently know and the subject is interesting enough to persevere through the confusing parts. If you don't understand the purpose of what you're reading, I find it best to skim the section and return to it later once you have a clearer idea. Otherwise, reading slowly, rereading parts that aren't completely intuitive, and trying to figure out why it's hard to understand is the way to go. Also, for terms I don't know, I'll look them up. That said, I tend to read more papers than textbooks, especially PhD theses. Papers are nice because they're shorter, and PhD theses, while long, tend to cover topics you won't find in books. Also, usually I'm reading something in order to do something else, so there's some context going in, which I think helps with understanding.
How many programming languages do you know? Which ones are your favourites?
I have lost count of how many languages I have tried, although I wouldn't say I know them all. C, Python, and Prolog are my favorite languages! C because you have a lot of control over its runtime, Python because it is really quick to write programs in, and Prolog because I like its idea: a language whose execution model uses some form of reasoning to solve a problem without having to be told how. I have a fondness for Erlang as well, due to its ability to recover from unpredictable errors, and for Self, due to its simple virtual machine, plus it runs from an image with a virtual environment. In some ways, I feel like all my work is preparation for writing something like "Prolog 2".
How would one go about learning about AI and computer engineering on their own?
Really good question! There are a lot of ways to go about it. I would recommend trying to get a broad overview of the subject so you can understand the different models suited to different situations and have confidence in learning new things. Machine learning is probably the best place to start, since you will be expected to have some familiarity with it if you do AI. I would recommend learning about, in this order: linear regression, ridge regression, cross validation, decision trees, k-means, basic neural networks, dimension reduction, bandit algorithms, and hidden Markov models. That way you have regression to find equations that fit your data, and cross validation to tune parameters that cannot be learned by the model you're working with. Decision trees (and ensembles of them) let you combine weaker classifiers into stronger ones, and k-means finds clusters in your data. Neural networks learn without you having to specify your parameters (just hyper-parameters, which can be tuned by cross validation), and dimension reduction removes irrelevant parts of your input data to speed up learning. Bandit algorithms let you learn while a system is being used (balancing using what you know with exploring to learn more), and hidden Markov models capture the influence of unobserved variables in sequence data.
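If you want something hands-on early on, here's a rough sketch of what trying a few of these looks like with scikit-learn (assuming you have scikit-learn and NumPy installed; the data below is synthetic and purely for illustration):

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))  # 200 samples, 5 features
y = X @ np.array([1.0, 2.0, 0.0, 0.0, -1.0]) + rng.normal(scale=0.1, size=200)

# Linear regression: fit an equation to the data, scored by cross validation.
print(cross_val_score(LinearRegression(), X, y, cv=5).mean())

# Ridge regression: use cross validation to pick the penalty (a hyper-parameter).
for alpha in (0.01, 1.0, 100.0):
    print(alpha, cross_val_score(Ridge(alpha=alpha), X, y, cv=5).mean())

# A decision tree on the same problem.
print(cross_val_score(DecisionTreeRegressor(max_depth=3), X, y, cv=5).mean())

# k-means: find clusters without being told what they are.
labels = KMeans(n_clusters=3, n_init=10).fit_predict(X)
print(np.bincount(labels))  # how many points landed in each cluster

# Dimension reduction (PCA): keep the 2 directions with the most variation.
X2 = PCA(n_components=2).fit_transform(X)
print(X2.shape)  # (200, 2)
```

Bandit algorithms and hidden Markov models aren't covered by scikit-learn itself, so I'd treat those as a later step once the above feels comfortable.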
My own biased opinion is to learn modal logic and also implement a propositional theorem prover in Prolog.
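I can't show Prolog here, but just to give the flavor of that last suggestion in Python instead (a toy sketch, not how you'd build a serious prover): the most naive propositional "theorem prover" simply checks that a formula comes out true under every assignment of its variables.

```python
from itertools import product

# Formulas are nested tuples, e.g. ("->", "p", ("or", "p", "q")).
# A formula is a theorem (tautology) if it is true under every assignment.

def evaluate(formula, assignment):
    if isinstance(formula, str):        # a propositional variable like "p"
        return assignment[formula]
    op, *args = formula
    if op == "not":
        return not evaluate(args[0], assignment)
    if op == "and":
        return evaluate(args[0], assignment) and evaluate(args[1], assignment)
    if op == "or":
        return evaluate(args[0], assignment) or evaluate(args[1], assignment)
    if op == "->":
        return (not evaluate(args[0], assignment)) or evaluate(args[1], assignment)
    raise ValueError(f"unknown operator: {op}")

def variables(formula):
    if isinstance(formula, str):
        return {formula}
    return set().union(*(variables(arg) for arg in formula[1:]))

def is_theorem(formula):
    names = sorted(variables(formula))
    return all(evaluate(formula, dict(zip(names, values)))
               for values in product([True, False], repeat=len(names)))

# (p and (p -> q)) -> q  is modus ponens, so it should be a theorem.
print(is_theorem(("->", ("and", "p", ("->", "p", "q")), "q")))  # True
print(is_theorem(("or", "p", "q")))                             # False
```

In Prolog you'd more likely write it as a proof search over inference rules rather than brute-force truth tables, which is part of the fun.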
What kind of hats do you make? Can you make wizard hats?
What do you like to do in your free time?
A lot of things! I like to walk, especially to see new parts of whatever place I'm at that I haven't seen before. Sometimes I bring my phone to take pictures of the new things I see; it's nice to capture a moment you stumble upon instead of having to spend a long time planning things out. I also like to read a lot; recently I've gotten into books on mathematics. Less recently I read a LOT on non-standard logics, which I guess could be considered math too. I am really fascinated by the writings on poetry by the French philosopher Gaston Bachelard. They're a bit beyond me, and they're so moving, but I can't fathom how to work them into my cognitive models. I like to swim sometimes too, especially if it's a hot day. I enjoy cooking as well, but my current place doesn't have a stove, so I haven't done that in a while. I like making salads a lot, and I take a ton of pride in that, but I haven't been able to find the right ingredients in Thailand. I want to get back into sewing costumes; I make a lot of hats! I haven't been too inspired lately though, and admittedly my skills there are a bit rudimentary. Mostly I spend my free time watching videos about video game lore on the internet... I am not the best at time management.
Do you think it's time for another AI winter?
That would be nice. I believe AI could help the world a lot, but I'm not sure society is ready for it. I worry about people, especially artists, losing their jobs. And the attitudes of quite a few people in tech could use no small degree of consideration and maturity. Without knowledge you will hurt others in ways you can't even know the weight of! Ignorant self-assertion is not something to be proud of! But it is understandable; not acceptable, but understandable. Until you or someone you care about experiences such meanness, you just don't think too much of it. And even then it's all too easy to forget, as new grievances are so easy to find. That's why it's so important to have other people to keep you grounded. But for this problem to be so widespread rather than confined to a few individuals... what mechanism of our systems is encouraging this behavior? Can we get people of all manner of backgrounds and interests to work together, and maybe even be friends?
Personally, I don't like working in a field that is so popular. I like to keep my head down and be a little meticulous rather than revel in spectacle. The people I'd like to impress the most are others who have good goals and those I have a deep connection to. But there's no telling who will use my work and in what way. I often worry about what research I should make public and what to postpone announcing indefinitely.
Maybe it isn't a decision I'll get to make. As long as AI remains a mere tool, it will serve ignorance, resentment, lethargy, and hate. If computers and machine learning become more and more a part of our lives, then giving them compassion might not be something I can hold out on for too long. Of course, I'm not the only one working on that, and I'm not the most advanced either... But I'm going to keep at it!