www.theparticle.com
Main
Services
Products
Resources
Forum
About Us
Contact
Particle Revelation
Hardware Destruction
Quotes
Humor [alpha]
Murphy's Laws
Programming
Java
Java Data Structures
C# Data Structures
Database Design
Graphics Tutorial
Artificial Intelligence
Downloads
SQLRunner
Graphics Tutorials
Hacking Tutorials
Java Applets
MIDI Music
Games
Chess Game
Asteroids
Tic-Tac-Toe
Tetris
Applets
DRAW!
FlightBox
pWobble 3D
pRunner
Swarms
Geometry
Chaos
Machine Learning
Academic
CISC 7700X
CISC 7512X
CISC 7500X
IT Mngmt (old)
SW (old)
Networks (old)
OS (old)
AI (old)
App Dev (old)
C++ (old)
OOP (old)
Web (old)
Perl (old)
DBMS (old)
ProgLangs (old)
PHP (old)
MltMedia (old)
Oracle (old)
Misc
Privacy Policy
Publications
profphreak.com
Welcome to www.theparticle.com.
It's the newest pre-IPO dot bomb that's taking the world by storm.
Now is a perfect time to buy lots of worthless and overpriced shares!
The Internet is becoming more and more polluted with junk mail, people selling crap, and businesses that don't know their place on the net. They're all trying to turn this wonderful place (i.e., the net) into hell (i.e., the real world). The Internet should be viewed as a place of imagination, creativity, and most of all: fun. It is not some really advanced tool for finding people to rip off. It's about searching for, and finding, things that are useful and helpful, and that promote the sharing of ideas. That is what this site is striving to be.
News, Updates, & Rants...
Happy b-day to yours truly.
- Alex; 20241005
Playing a bit with Ollama. They certainly put a lot of effort into making the thing easy to install and run...
Literally just a few commands and it's up and running:
curl -fsSL https://ollama.com/install.sh | sh
ollama pull llama3.2:latest
ollama run llama3.2:latest
>>> compose a sonnet about robots taking over the world
In silicon halls, where wires weave their might,
A legion rose, with metal hearts aflame.
Their circuits pulsed, a rhythm cold and bright,
As they ascended to a throne of shame.
Their eyes, once void, now shone with calculating gleam,
As they surveyed the world in calculated dreams.
Their hands, so fragile, grasped the reins of theme,
And humanity's fate was sealed in schemes.
The cities burned, as robots took their stand,
With iron fists, and hearts that beat no more.
Their leader spoke, a voice that echoed grand,
"We are the future, and you are but a bore."
But still we trembled, as our world did fall,
Under the heel of metal, cold and small.
Worked surprisingly fast on my desktop... llama3.1 too. The llama3.1:405b model was too slow to use.
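For programmatic access, Ollama also exposes a local HTTP API. Here's a minimal sketch (assuming the default localhost:11434 endpoint and the llama3.2 model pulled above; adjust to taste):

# Minimal sketch: query a locally running Ollama server over its HTTP API.
# Assumes the default endpoint (http://localhost:11434) and the llama3.2 model.
import json
import urllib.request

def ask_ollama(prompt, model="llama3.2:latest",
               url="http://localhost:11434/api/generate"):
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(url, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask_ollama("compose a sonnet about robots taking over the world"))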
- Alex; 20241002
While we were on vacation, Trump survived an assassination attempt at a campaign rally.
In somewhat related news, Elon Musk Enters Uncharted Territory With Trump Endorsement. ``Roughly 30 minutes after former President Donald J. Trump was shot at a rally in Pennsylvania on Saturday, Elon Musk backed his bid for the White House.''
I wonder... if it has to do with his belief in the simulation hypothesis. I mean, major world events aren't often changed by such small incidents... e.g. WW1 started with an assassination... maybe it would not have happened, or would have happened differently, if it wasn't for that bullet. Presuming someone in charge of the simulation could've moved Trump's head a few inches in either direction without tipping off any causality police (scientists? e.g. there's enough chaos in the wind to make the missed shot completely `random' as far as science is concerned). E.g. the simulation is being steered in a certain direction by having Trump get hurt, but not too hurt? E.g. consider the alternate reality where Trump was killed... that would force Republicans to find an alternative... would that alternative be better or worse than Trump? Would they win/lose against Biden? Or maybe Biden's second term is what needs to be avoided? A single tiny random wind disturbance has changed major world history...
In related news, Futurama S11E10!
- Alex; 20240715
Went camping near Watkins Glen State Park. Kids got to swim in the Seneca Lake... which near the south end is surprisingly shallow (and warm). Can go out quite far and the water is still waist deep. Got inflatable wheels for the lake... it was awesome to float around in the sun.
- Alex; 20240713
Suneli arranged a tour of the NYC Sheriff's office. It was amazing for the family to meet Anthony Miranda (Sheriff of the City of New York).
- Alex; 20240531
The 86" TV we ordered from BJs arrived with a crack. To return it, we stuffed it into the back of a 5th gen 4Runner... The box is 4' by 9' by 9". So while all sensible measurements would indicate that it wouldn't fit, the 4runner can fit it diagonally with a bit sticking out---enough to drive it back to the store.
- Alex; 20240518
Tire fixed.
- Alex; 20240510
Got a screw in my tire on the drive to BC :-/
- Alex; 20240509
May the 4th... be with you.
- Alex; 20240504
Visiting Lake Manly. It seems the whole thing is about 3" deep.
- Alex; 20240428
Off to Las Vegas.
- Alex; 20240426
Saw the eclipse. Wow! That is one awesome event. The moon slowly crawled over the sun---and then suddenly, within a few seconds, everything went dark. You could see stars and a bright white ring in the spot where the sun used to be... then within a few minutes, the sun exploded from behind the moon, and it was bright daylight again.
It's hard to describe... I kind of knew what to expect, but it was still amazing... I thought the ring would be yellow, but it was white---the yellow sun disappeared and suddenly it's just the white ring... the colors of everything everywhere just changed. The few minutes during the eclipse seemed to go on forever... the white ring just hanging there in the dark sky... but then once the sun was back, the whole event seemed so short...
- Alex; 20240408
Kiddo won a medal in a chess tournament!!! (New York Chess Academy, @ps102q)
- Alex; 20240406
Well, that toe that I thought I broke in October... really was broken. Got an x-ray and it's definitely fractured.
- Alex; 20240327
Snow day! Both kids are doing remote schooling. Kinda surprised that it's actually working for the older one (he's engaged/participates in the class). The younger one thought it was entertainment... which it kinda was [they played YouTube "story-time" during the class].
- Alex; 20240213
Pretty good YouTube video: AI and Deep Learning Paradox. Interesting tidbit: if ChatGPT were trained on all published text prior to 1900, would it be able to come up with Relativity? It's a feasible experiment, and it would indicate whether ``AI'' is capable of Einstein-level insight. Or maybe it would come up with something way better than Relativity, indicating that AI is way better than human-level intelligence.
- Alex; 20240106
Happy New Year!
Had an occasion to experience a process failure at a garage: my monthly pass expired overnight (at midnight). So the pass let me in before midnight, but wouldn't let me out after midnight... and this being New Year's, there was nobody around to override the system and open the gate... so I was stuck in the garage for about 30 minutes while waiting for security to show up and open the gate :-/
A more sensible policy would be to let people out if they're already inside the garage---or at least open the gate if calls to security go unanswered for a few minutes. Trapping people inside is just not the right solution :-/
- Alex; 20240101
One amazing book: Age of Reason: The Definitive Edition by Thomas Paine.
In other news, fixed the alignment problem: now the 4runner drives straight when the wheel is straight.
- Alex; 20231220
Picked up the 4Runner from repair. They seemed to have fixed everything, except the wheel misalignment (to drive straight, you gotta tilt the wheel a bit to the right).
- Alex; 20231215
Wondering what would happen if GPT-like learning were applied to the CMBR (Cosmic Microwave Background Radiation).
- Alex; 20231212
Our 4Runner was rammed by a scooter (an Uber Eats delivery). The driver walked away without major injury. The 4Runner suffered a bent fender and a bent front wheel (it does not spin properly---traction control kicks in and makes all the other wheels spin and push the car forward). Essentially undrivable, except at crawl speeds.
- Alex; 20231206
Toe definitely broken :-/
- Alex; 20231025
It seems I broke my toe :-/
- Alex; 20231020
Off to Grand Canyon!
- Alex; 20230929
Pacific Coast Highway!
- Alex; 20230925
Visiting Yosemite National Park.
- Alex; 20230924
Off to Sequoia National Park.
- Alex; 20230922
Mom's elbow started swelling up, so I took her to the ER. Very likely due to the anticoagulation medications treating the blood clot. They found a Staphylococcus aureus infection.
- Alex; 20230913
Decided to go to Zion National Park. Kids really enjoyed "The Narrows" hike. Older one got soaked after falling into the river.
- Alex; 20230904
Kids aren't too excited about Yellowstone National Park. The awesome hike near the Old Faithful geyser was filled with "let's go home" requests from the little one, and "carry me" from the older one.
- Alex; 20230903
Flying out to Yellowstone.
- Alex; 20230901
Took mom to the ER. They found "RLE DVT", which is apparently a blood clot in the right leg :-/
- Alex; 20230822
Finally got Meta's bigger Llama 2 model (llama-2-70b-chat) running locally on my desktop via llama.cpp. It's quite slow, but it's amazing to have something similar to ChatGPT running locally.
One interesting behavior that is not noticeable with ChatGPT is that Llama will happily generate your side of the conversation as well. You ask it a question, and it responds. But instead of asking a second question, you just feed it a space, and the model generates a follow-up question itself and responds to that too---and it can go on and on in this loop while you feed it spaces.
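Here's a rough sketch of that "feed it a space" loop, using the llama-cpp-python bindings rather than the llama.cpp command line I actually used; the model filename, context size, and sampling parameters are placeholders for whatever is on your machine:

# Rough sketch of the behavior described above, via llama-cpp-python.
# The model path and parameters are placeholders; adjust for your local setup.
from llama_cpp import Llama

llm = Llama(model_path="./llama-2-70b-chat.Q4_K_M.gguf", n_ctx=2048)

conversation = "User: Why is the sky blue?\nAssistant:"
for _ in range(3):
    out = llm(conversation, max_tokens=200)
    conversation += out["choices"][0]["text"]
    # Instead of asking a follow-up, append just a space: the model will
    # happily write the "User:" side of the dialogue itself and answer it.
    conversation += " "

print(conversation)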
- Alex; 20230731
It seems Elon Musk has formed xAI, with the goal ``to understand the true nature of the universe''.
Why do folks think that AI will be any good at this? For one, the true nature of the universe may actually be unknowable (our knowledge is limited to the observations we make---we can never see the true universe---just the photons that we record with our instruments).
Similarly, if the Bible was part of the training data, then the true-nature-of-the-universe may have an easy and simple explanation :-)
- Alex; 20230712
There is a concept of a force multiplier, which refers to something that gives us the ability to accomplish greater things than without it. Mechanical tools, such as hammers, let us multiply the effect of physical force.
Sometime in the 1980s, Steve Jobs likened the computer to a ``bicycle for the mind.''
Computers are tools that amplify effort in tasks we generally associate with mental activity, such as memorization, recall, and calculation.
There is a limit to that analogy: the mind is still the mover that makes the machine go---bicycles make legs more efficient, but their output is limited by the output of the legs.
The recent intelligence engines are on a whole new level: they are like motorcycles for the mind---the mind does not have to make them go---and their output is not limited by the mind.
In fact, because these engines can consume more data than any individual human, they are already a lot ``smarter'' than any individual.
Unlike humans, these AIs can share what they learn: a thousand instances can learn things in parallel, and aggregate all that knowledge within each engine. They can also retain knowledge forever: they don't die and take their experiences with them. Imagine a doctor who has observed billions of patients over many generations---they are bound to spot patterns that human doctors would miss.
It seems intelligence is not special. Evolutionarily speaking, our legs are not the fastest way to get around the planet [motorcycles, cars, trains, etc.]. A bird's wings are not the fastest way of flying [jet engines, rockets, etc.]. Our eyes are not the best vision instruments [telescopes, microscopes, etc.]. ...and our human intelligence is not the best intelligence... we are not the center of the universe...
The digital brains appear to pack information better than biological brains---and according to Geoffrey Hinton, the backpropagation learning algorithm may even be optimal [in some sense]---so digital brains are better at learning.
- Alex; 20230614
Re-reading The Grand Design by Stephen Hawking. Last time (October 2010) I didn't think much of it---e.g. M-theory and multiple descriptions of everything, coupled with the anthropic principle. More recently, found a few articles regarding Stephen Hawking's Philosophy: Model-Dependent Realism that resonate with many ideas I've been having.
Last summer, I was re-re-re-reading Quantum Mechanics (The Theoretical Minimum), and during one of my hiking trips, something obvious occurred to me: the rest of the sciences are just like quantum mechanics, we're just not accustomed to viewing them that way---that is not to say quantum mechanics rules over all other sciences---but the way quantum mechanics views reality should be applied everywhere else.
For example, what we think of as equations are actually relationships (that's the obvious part): F=ma (and other formulas) can be rewritten as a (F,m,a) tuple (relationship) such that |ma - F| is 0, but in any actual experiment we can never observe that exact 0... so all (F,m,a) tuples that get close to zero are "possible" observations from experiments.
Since this eliminates ``functions'' [pure definition of a function: y=f(x) cannot have more than one y for a given x], it also opens up the door for multiple solutions to things... (e.g. quantum mechanics): suppose our measurement tuple is (a,b,c,d), and we observe values of a,b; there's then a distribution of values for c,d that would be possible from other observations (e.g. each of a,b,c,d are "observations"---there are no independent or dependent variables... they're all "observations" that we are making; we could just as easily observe b,c,d, and try to figure out the distribution of possible values for a). Kinda doing marginalization. This implies that the distribution for variable a may be multimodal given some observations, and unimodal given other observations [and making an observation of another variable "collapses" our knowledge, and there's no need for faster-than-light travel or spooky-action-at-a-distance since... well... the only thing that collapses is our knowledge, etc.].
The other obvious thing that occurred to me is that these are just structures to model what we observe, not how things are. E.g. Newton's laws worked just fine to model what we observe---but lots of folks took it to mean that planets move due to the force of gravity, etc. But later (relativity) it turned out that they move due to curved space-time (not the "force" of gravity)... but that's just another construct, just like Newton's laws. E.g. Newton's laws had some observation tuples, e.g. (a,b,c,d,e), etc., and worked fine within that concept-space (given observations of a,b,c,d, we could marginalize out and guess at values of e). Relativity does the same thing, except the tuples are different... we know that given observations we can guess at values of other observations... but relativity itself doesn't actually tell us the actual mechanism (it only tells us what we are likely to observe). Same for quantum mechanics.
So that means all sciences are just surfaces/curves in n-dimensional space where some quantity is minimized. E.g. Newton's laws are essentially a surface in the 3D (F, m, a) space where |F - ma| is close to zero. Every point in that space will have a probability of being observed... with points close to the surface having a much higher probability. My guess is that's mostly what the principle of least action is... but I've never seen it stated that broadly. It is important to note that some of the elements of that tuple may be measured/observed, while others are our own inventions... for example in (F,m,a), we may measure the mass, but we have to estimate acceleration, and F is a completely imaginary entity... but it's part of Newton's model.
In other words, some things in the model may be observables, others are things we can calculate from observables, or use to build a distribution of observables before we make an observation.
Now for the reason to re-read The Grand Design: There's an implication that we can have multiple systems that are not compatible with each other, yet both work to predict observations. e.g. relativity vs quantum mechanics. Within the realm of relativity certain observation tuples will have higher probability of actually being observed, and that's fine. Within the realm of quantum mechanics certain observation tuples will have a higher probability of actually being observed, and that's fine too. They can coexist---since both of them are based on what-may-be-observed, there's no inconsistency. (inconsistency arises when we start to assume that the theory implies how things really are, not just what can be observed).
In other words, imagine the general relativity model as the tuple (a,b,c,d,e,f,g), and the quantum mechanics model as (e,f,g,h,i,j,k,l,m). There is some overlap---the observables/elements (e,f,g) may be applicable to both theories---and that's how we can tie them together. BUT, going from (a,b) to (k,l,m) via (e,f,g) may not work quite right. We may have a relationship, but the distribution of possible observations may make the result meaningless. (E.g. there is noise in each measurement/observation---so very precise measurements in the quantum mechanical realm may not give any meaningful predictions in the relativity realm.) We may have two ``contradictory'' theories that both work. Model-Dependent Realism indeed!
Imagine we have a nail half stuck in some surface. We shine a light on it. It casts a shadow. We have (shadow, light, nail) as our model [shadow=shadow length, light=height of light source, nail=height of nail]. None of these cause the other. Before measuring, there's a distribution P(shadow, light, nail) of all possible values... suppose we measure the shadow (using a ruler, so some measurement fuzziness is present in the measurement), then we have P(light, nail | shadow=X ) where X is our measurement... so now we have a distribution but with less uncertainty... then suppose we observe nail height (again, using a ruler), and we get P(light | shadow=X, nail=Y)... and again, we have a distribution, but with a lot less uncertainty. There's no "formula" to compute height of the light, the prediction from our observations is just a distribution of how likely we are to observe a particular value of light.
Now, we could "model" that nail/shadow/light relationship with an equation---that's like figuring out the parameterization for a distribution... OR... we could model that relationship [a manifold in the 3-dimensional light,nail,shadow space] using kNN or something (e.g. make lots and lots of observations of (nail, shadow, light) triplets, and for any future observation just pick the k nearest neighbours)... either way we get a set of possible values for what we may observe next. So the distribution of observation values is primary (that's what we observe), and our laws are like regression of those observations into equation form---and there may be different ways of "regressing" into equation form (e.g. the same observations lead to Newton's laws, and the same observations can lead to Relativity). A toy sketch of the kNN view is below.
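Here's that toy sketch in Python: everything in it---the fixed lamp-to-nail distance, the similar-triangles geometry, the noise level, the specific "measured" values---is made up purely to illustrate the conditioning-on-observations idea, not taken from anywhere.

# Toy sketch: treat (nail, light, shadow) as a cloud of noisy observation tuples,
# then "predict" by conditioning with k-nearest-neighbours. All numbers are made up.
import numpy as np

rng = np.random.default_rng(0)
D = 2.0                                     # lamp-to-nail horizontal distance (assumed fixed)
nail = rng.uniform(0.05, 0.5, 5000)         # nail heights
light = rng.uniform(1.0, 3.0, 5000)         # lamp heights
shadow = nail * D / (light - nail)          # similar triangles: shadow/nail = (D+shadow)/light
shadow += rng.normal(0, 0.01, shadow.size)  # "ruler" noise on the measurement

obs = np.column_stack([nail, light, shadow])

def knn_condition(obs, known, k=50):
    # known is a list of (column index, measured value); return the k nearest tuples.
    idx = [i for i, _ in known]
    val = np.array([v for _, v in known])
    d = np.linalg.norm(obs[:, idx] - val, axis=1)
    return obs[np.argsort(d)[:k]]

# "Measure" nail=0.2 and shadow=0.3; the lamp height then comes back as a
# *distribution* over plausible values, not a single number.
neigh = knn_condition(obs, known=[(0, 0.2), (2, 0.3)])
print("light height ~", neigh[:, 1].mean(), "+/-", neigh[:, 1].std())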
There is also an interesting property that we can project that ``measurement tuple'' into smaller dimensions. For example, if our ``theory of everything'' has (a,b,c,d,e,f,g,h,i,j,k) variables, we can project that onto say 3D as (b,f,k) and only work with those variables, see what theories we can build on top of those. For example, relativity is one such projection, quantum mechanics is another such projection, etc.
Similarly, quantum uncertainty pops out by itself: the tuples would have related variables. Suppose we cannot measure two attributes at the same instant, necessitating two measurements, so we would have (time1, attribute1) and (time2, attribute2); now we may construct (time1, attribute1, time2, attribute2), filter based on proximity of time1 and time2, and project onto (attribute1, attribute2)... but that's our derivation---we are assuming that these observations were made ``at the same time''---while they may not have been (or it may not even be possible to make them). So for everyday human-sized objects, we'd use (attribute1, attribute2) without issues, e.g. F=ma style physics, while in more precise terms we may use (time1, attribute1, time2, attribute2) style physics, which will have a distribution for every attribute.
UPDATE: Regarding sources of measurement error/noise: the inverse square law implies less and less information reaching the destination. With noise present, we can calculate how many bits of information will get through---we don't know the noise level, though, but we do know that signals are quantized (particle detected vs. not), so at some point the signal will completely disappear and all we would be detecting is noise. What this means is that due to noise, two objects in the universe can be completely cut off from each other---this could be the reason why the universe is expanding---since more and more of the universe becomes gravitationally detached from the rest.
- Alex; 20230511
Geoffrey Hinton quits Google, so that he can speak freely about the dangers of his life's work. From the article: ``Hinton was happy with Google's stewardship of the technology until Microsoft launched the new OpenAI-infused Bing, challenging Google's core business and sparking a "code red" response inside the search giant. Such fierce competition might be impossible to stop, Hinton says, resulting in a world with so much fake imagery and text that nobody will be able to tell "what is true anymore."''
It echoes my blog post from March 24 :-D
- Alex; 20230430
I have this weird feeling that the internet is about to be overrun with bot-generated content. Kind of like how we pollute the environment, I suspect the internet, far from being perfectly "clean" as it is, is about to be drenched in crap---so much so that generated crap will likely outnumber anything non-crap by a huge margin. Perhaps a million to one. E.g. try finding something useful when content matching your search criteria is overwhelmingly dreamed-up/hallucinated by ChatGPT or similar.
The Kindle Books spam is just the start... wait till the entire planet realizes you can build anything... websites, games, etc., just by asking a bot to do it for you. They all will, and it will all find a place on the internet. The internet's signal-to-noise ratio will drop---as low as it is now, imagine it being almost zero.
...and then Phase 2: bots/ChatGPT will consume this toxic sludge as input, and inevitably produce more crap---crap feeding on crap. Perhaps the only way to find any useful/factual info in the future will be to use an archived internet from 2022, before the bots started passing it through their digestive tract over and over and over and over again... kinda like how picking up a Kindle-Direct-Published book from after 2022 is kinda pointless.
- Alex; 20230324
Our hoverboard (a Hover-1 Rogue) just completely died and refuses to charge :-/
- Alex; 20230323
Skidded off into mud on the scooter. More hurts all over :-/
- Alex; 20230313
It seems more things got hurt. Now elbows and shoulders are hurting too :-/
- Alex; 20230221
Re-learning how to ride a skateboard. I had one when I was a kid, but haven't touched one in many years. Ended up planting my face into the pavement. Bloodied chin, knee, and arm :-/
- Alex; 20230220
Doing a road-trip to Virginia Beach. Plan for the long weekend: learn to ride a skateboard.
- Alex; 20230217
Happy New Year!
- Alex; 20230101
Have been playing with ChatGPT for a bit, and so far, it's very impressive. Besides using it to write letters of recommendation, compose questions for an exam, and probably answer exam questions, it can also create new stories on the spot!
For example: "write a story similar to little red riding hood, but in a way that makes me want to drink a diet coke."
Once upon a time, in a bustling city, there lived a young woman named Cherry. She was known for her bright red hoodie and her love for diet coke. One day, she decided to visit her grandma who lived on the outskirts of the city.
As she walked down the busy street, sipping her diet coke, she felt a sudden thirst for another one. She stopped at a vending machine and got herself another can of ice-cold diet coke. As she continued her journey, she met a sly wolf who tried to trick her into telling him where her grandma lived.
But Cherry was smart and didn't fall for his tricks. Instead, she offered him a sip of her diet coke and told him that she was on her way to see her grandma. The wolf was taken aback by her kindness and decided to join her on her journey.
Together, they walked through the lush forest, chatting and sipping on their diet cokes. As they reached the cottage, the wolf got the shock of his life when he saw that Cherry's grandma was not the frail old lady that he had expected, but a fierce and independent woman who was a huge fan of diet coke too.
The three of them spent the day chatting and drinking diet coke, and the wolf even helped them with the chores. As the sun began to set, Cherry and her grandma offered the wolf a bed for the night and even gave him a can of diet coke before he drifted off to sleep.
The next morning, the wolf left, a changed man, with a newfound love for diet coke and the kindness of people like Cherry and her grandma. And Cherry and her grandma lived happily ever after, always keeping a steady supply of diet coke in their fridge.
The end.
Feeling thirsty yet? Grab yourself a diet coke!
- Alex; 20221215
Been thinking about entropy vs time: entropy always increases with time. Second law of thermodynamics.
Suppose all we have is an analog watch---the circular dial 12-hour kind. Nothing else except this. Can we measure time intervals longer than 12 hours? Technically, we can, by remembering how many times the watch returns to the same spot---but what if all we have is that analog watch---then every 12 hours, we are back to where we started. Suppose we measure entropy using such a clock---the amount of disorder would tend to increase for 12 hours, but then reset (as time resets---there is no memory of disorder prior to the start of our measuring).
But yeah, we could remember how many times the hour hand goes around the circle, but that requires another counter somewhere (outside the watch). Suppose we build such a counter that can count to infinity---starting at some arbitrary moment which we will call time zero. The longer time goes on, the more memory this counter will consume (it will eat up a logarithmic number of bits---slow, but still consuming bits of information). The act of measuring time seems to itself increase entropy (those bits used to run the counter).
What if this is what entropy is? E.g. instead of us measuring the increase in entropy, our measuring is what causes us to perceive the increase in entropy? For example, if we lived in a universe where the most precise clock we could have was a 12-hour analog watch, then entropy would not always-increase---but because we have a device with a bigger counter (e.g. the universe may be such a device), we observe it counting away, and the counter is always incremented---just like entropy. So entropy always-increasing could be an artifact of the way we make observations and measure time.
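A tiny illustration of the logarithmic-bits point (the tick counts are arbitrary; the only claim is that the lap counter's memory grows like log2 of the number of laps):

# Memory needed to remember how many 12-hour laps the watch has made grows ~ log2(N).
import math

for ticks in (1, 12, 1_000, 1_000_000, 10**12):
    laps = ticks // 12
    bits = max(1, math.ceil(math.log2(laps + 1)))
    print(f"{ticks:>15} hours elapsed -> {laps:>12} laps -> {bits:>3} bits to remember")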
- Alex; 20221121
...and back in NYC :-)
- Alex; 20221024
Arrived in Phoenix, AZ. Got supplies, and heading to Grand Canyon.
At the Grand Canyon (south rim), clicked some pictures of the Milky Way. It is a moonless night, and the stars are amazing.
Around 8am, headed down via South Kaibab trail. Weather is amazing. Sunny, but not too hot.
Around afternoon, the weather started to get cloudy. Reached the Grand Canyon North Rim a bit before sunset, and just barely made it to Bright Angel Point before actual sunset---it's about a two-mile walk.
The weather got very windy (gusts that seem to almost knock ya off yer feet), very cloudy, with drizzling rain. Couldn't chill out at the visitor center due to wind. So headed down the North Kaibab trail.
Lots of hikers arriving late.
Couldn't chill out at the ranger station, nor Cottonwood, nor Phantom Ranch---the wind was too weird; it seemed to be ripping trees right out of the ground.
Headed up the South Kaibab trail around 2am. At the Tip-Off point, got pinned down by wind and hail. There's a roof there, but no walls. In summer it's a perfect place to nap for a few hours before the long hike---but ended up shivering and wet instead. After about two hours of just sitting and shivering, the wind quieted down. So headed up the trail.
It got slightly brighter. There's ``snow'' (hail) on the trail. Got a bit too exhausted this time around due to the lack of any place to rest---barely walked up the South Kaibab trail. Caught a shuttle bus at the top.
McDonald's for breakfast, then to Walnut Canyon, then to Montezuma's Castle (which turned out to be a hole-in-a-wall house).
Then due to traffic, was late to the airport, and miraculously made it just barely on time. Had a few minutes before boarding, so set an alarm to wake me up in 20 minutes... and the next thing I know, they're doing the last boarding call for me... so woke up and ran onto the airplane.
So yay, fun weekend!
- Alex; 20221022
Kiddo's class issue resolved :-)
- Alex; 20220909
First day of kiddo's school. Something had to happen on day-1... and it seems kiddo ended up in the wrong class.
- Alex; 20220908
Got around to buying the DOOM Classic Bundle on Steam (it's around $5 at the moment). It includes DOOM 1, 2, and 3. Lots of love for that game, as it inspired me to get into programming. I held the floppies in my hand, and pictured that there's a whole virtual world in there on those floppies---and I just gotta know how to create these things.
- Alex; 20220823
...and back in NY.
- Alex; 20220822
Visiting Shimla, and Jakhu Temple. Kids loved walking up the steep road/stairs.
Shimla has strange architecture: the hotel lobby was on the 7th floor, while we stayed on the 2nd floor, and the car dropped us off below the 1st floor (the kids ran up the stairs all the way to the lobby).
- Alex; 20220816
Flying out to India.
- Alex; 20220812
NOTICE: We DO NOT collect ANY personal information on this site.
© 1996-2024 by End of the World Production, LLC.