16122 construction cir e

Rooster (Chapter 3)

2023.03.27 04:28 IanCRBCannon Rooster (Chapter 3)

A New Friend
Though the sting of sweat in Titus’ eyes aided in the malignant feeling of exhaustion, the wind would occasionally coat his face with subtle relief. The junkyard was loudly quiet. The only sounds to break the tension of nothingness are the sounds of howling wind and occasional scream courtesy of the mentally ill or victimized inmates. When no companion is present, and when no settlement is in sight, the silence leaves room for nothing but thoughts to swim around in one’s mind. In the case of Titus, the thoughts inhabiting his mind are the words of Sloan, “Walk till there’s no more fence.” With this thought in mind, Titus would turn around every few minutes to see what he could see. While not plain as day, Titus could still see a rough image of a fence. Despite attempting to count his steps to figure out how long he had been walking, he lost count around step four hundred. Titus stopped walking, worried he would faint. As he sat on the ground, another thought entered his mind. These mountains of trash and old artifacts seem to have been pushed in a certain way to allow the forming of simple roads. As he examined these roads, Titus noticed a sign in the shape of an arrow that read “Preachers Town”.
After resting for only a minute, Titus lifted himself off the ground and followed the arrow.
After walking 200 steps Titus arrived at what looked like a town. The town boasted three buildings, constructed of plywood, and patched up with cardboard. The only other visual that stood out was the burned-down shack. Titus noticed the building Nearest him had a sign on the front door that read “McTell’s”. Titus walked in and was greeted with stares from the three people that sat in there. The room looked like a bar from the mid-180s but was mostly made from strong cardboard and plywood building. The ceiling had a consistent shower of leakage, with buckets placed throughout the floor to contain the water droplets. “What brings you in, stranger,” said an African American man, wiping a glass cup with a cloth behind the counter. Titus began to ask, “Are you M. C. T. E-”, The man interrupted, “McTell?” Titus then shrugged. “Is that the guy’s name on the sign?” said Titus. “I can see they didn’t just throw you in here because we share the same skin color. No, I’m not McTell.” The tilt of the head and the bend of the eyebrow from Titus indicated he was confused. “Throw me in here for sharing your skin color?” Titus asked. The man walked over to Titus and gestured at the nearest seat. “Take a seat friend.” The man and Titus sat at a table together. “I assume you were greeted at the front gate by someone, correct?” asked the man. “Yes” answered Titus. “Did he tell you why you were there?” he asked. Titus paused for a moment to collect his thoughts on the man at the gate. “I think something about intellect and not looking like I’m supposed to,” said Titus. “It seems that you don’t understand what’s going on, or you haven’t stopped to think about it.” After saying those words, the man gestured at a Caucasian person sitting at the bar to come to sit with them. “This, right here, is my friend Jason. If you notice, his skin is a little lighter than mine, and people would call him white. My skin happens to be a little bit darker, and most people would call me black. This is called a race. All through history people like me have been mistreated because of their skin tone, or race, that they have. It just so happens that you also have that darker skin color and the government decided one day we shouldn’t be allowed to live the lives that white people lived. Instead, what they did is they took people that look like us, along with people called gay, mentally challenged, disturbed, and criminal, and they restricted us to this big junkyard. But this isn’t just any junkyard. Before humans were cast out here, the government used this place to dispose of any historical or art-related items. Statues, pianos, paintings, and many more things were thrown in this place, so the masses could stay uninspired and loyal to the government.” After hearing all of this, Titus looked down at his hands and arms, viewing himself as if he had just noticed his physical body for the first time. A mixture of confusion and resentment built up on his face as he clenched his fist. He took a deep breath and said, “Thank you for telling me, whoever you are.” The man spoke with a smile “My name is Son.” Titus then asked, “If you’re not McTell, then where is he?” Son laughed and said, “Luckily, he’s dead. That evil bastard had it comin’. He made fun of a guy for thinking their broom was their wife. McTell called the poor guy’s ‘wife’ ugly and so the man strangled him.”
Just after their conversation came to a close and they both sat in the silence of the moment, the sound of the main bar door could be heard swinging open with rage. “I’m looking for McTell.” Spoke a hoarse, deep voice. Titus darted his head toward the door, along with everyone else. The man was a skyscraper amongst the rest of them. Dressed in steel-toe boots, bootcut denim jeans, and a straw hat, the man surveyed the room looking for McTell. “You’re about five months too late,” Son said, with confidence in his voice. “Well, dammit, someone is paying for my brother’s death. Billy and I were thrown in here together and we were gonna stay together, but your boss roped him into this criminal behavior. McTell’s crimes were gonna catch up to him one of these days, but I regret that it wasn’t me that finally got the jump on the guy.” The man lowered his hand down and grasped at a gun that was at his hip. The man spoke, saying “But as I said, someone is paying.” Almost simultaneously, The man drew his gun on Son and Titus stood up, putting himself between the gun and Son. “Shouldn’t we stick together?” Asked Titus. The man, still with the gun pointed at Titus, tilted his head in confusion. “We all were thrown in here together. If we want to make something of this place, we need to work together.” Said Titus, with sweat waterfalling down his face. The man took a deep breath and said “Maybe in another life, but he was my brother, and a murder is still a murder.” A shot rang out and, to the shock of Titus, the man fell to the floor. Titus swung his head to Son and saw a smoking revolver in his hand.
After Son and a few patrons cleaned up the damage and the body from the short gunfight, Son approached Titus. “I’m not the most humble guy you’ll meet, but even I can see that the whole confrontation would not have gone my way without you stepping up and distracting him, and giving me enough time to take my chance.” As Son said these words Titus began to form a response, but Son interjected and said “You know I’d like to repay you. A lot of people in this wasteland like to keep moving and not stay tied down too long in one place. They figure it’s better to keep to yourself and it’s hard to keep to yourself when you live in one spot where people know who you are. But, maybe if you want to stay a night, there’s a room right by the back door. It’s got a bed and some clothes if you need them. She’s all yours if you want it.” Titus mustered up a shy, half-smile and a nod. Son walked him to the room. The room, like the rest of the saloon, was dirty, old, and leaky, but it was comforting to the man that couldn’t recall the last time he had seen a bed.
It was the dead of night when Titus was awakened violently, not by a person, but by a sound that was unfamiliar to the frightened sleeper. Unknown to Titus, he was hearing music. Though unaware of this term, Titus began feeling warmth in his without knowledge that this sensation was a response to the music his ears were taking in. All at once, sight, smell, and taste became wholly irrelevant compared to the sensation of hearing that overtook Titus’ emotions. He began following the sounds, which lead him out the back door. What he saw baffled but intrigued him, a bright orange guitar leaning against the wall and a black, circular object, spinning on a machine that kept emitting music. The words stood out to the, now calm, Titus. The music would repeat the word “Rooster” and as Titus heard the word he looked over at the guitar and approached it. He sat on the ground with the guitar in his lap and a smile grew across his face. He seemed to enjoy the way the guitar fit in his hands, though he didn’t know what to do with it. As he sat by the music player he surveyed the guitar, and with a passionate smile he said, “Rooster”.
submitted by IanCRBCannon to u/IanCRBCannon [link] [comments]

2023.03.27 04:16 filthyflarp I've been playing arena for years...and years.

I've been playing arena for years...and years.
And it finally happened. Sorry if this is low effort/shitposting etc but I'm really quite chuffed and wanted to share with you wonderful people.
Thanks to MTGgoldfish and CGB as I am a trashy pleeb who only plays commander in paper.
submitted by filthyflarp to MagicArena [link] [comments]

2023.03.27 04:00 AutoModerator Weekly Discussion Thread; Upcoming News; ICYMI [March 27, 2023]

Amapá Iron Ore Mine, Brazil [KDNC] [Flair]

Sonora Lithium Project, Mexico (Ganfeng Joint Venture) [KDNC] [Flair]

Hastings Technology Metals, Australia (ASX:HAS) [KDNC] [Flair]

Evergreen Lithium, Australia (ASX:EG1) - [KDNC] [Flair]

European Metal Holdings, Cinovec, Czech Republic (LSE:EMH) [KDNC] [Flair]

Passive Equity Investments - Notable Developments [KDNC] [Flair]

See the above menus for company links and announcements - noting that additionally Macarthur holds 20% of spin off Infinity Mining ASX:IMI

Analyst Reports and Recommendations [KDNC]

Date Link Who SP Summary / Price Target
2023-02-28 Kemeny Capital Kemeny Capital (investment research) 12.9p "Cadence Minerals has built a robust portfolio of base and battery metals with ample opportunities for shareholder value creation. Recent developments have helped to substantially increase the potential of two of the group’s asset positions, while adding to the overall corporate valuation. Our sum-of-theparts (SotP) indicative fair value is 43.6p."
2023-02-02 W H Ireland Research W H Ireland (broker) 14.5p WHI View: Our assessment is that the Amapá mine could be company-maker for Cadence. We are firmly of the belief that the current market cap is more than covered by the legacy investments that Cadence holds in various new technology metal companies and projects and that the addition of the Amapá stake to its portfolio can only be value enhancing. In our opinion, the development of Amapá will be transformational for Cadence and we see fair value at 71p/sh with plenty of upside potential.
2022-07-24 Reddit Post u/EV-BULL (private investor) 10.75p Due diligence short term price target: 60p-80p; 1-2 year price target: 130p; Long term bull case price target: 300p+
2022-07-02 Reddit Post u/Observer842 (private investor) 10.4p Due diligence Near term: £37.5m (21.5p) - £92m (53p); Medium term: £160-420m; Longer term: £1-2B+
2022-02-18 Daily Mail Anne Ashworth for the Daily Mail (Journalist) 20.4p "The white gold rush appears to be an inviting prospect. But if you want to join, remember that fortunes are far from guaranteed in any foray into commodities. Options include the Aim-listed businesses Cadence Minerals and Zinnwald Lithium."
2022-02-18 Edison Group Edison Group (investment research) 20.4p QuickView report
2022-01-01 Daily Mail Justin Urquhart Stewart (fund manager) 28p Top pick for the Brave in 2022

ICYMI [Previous]

Date Article Comment & Quotes
2023-02-28 Cadence Minerals - value waiting to be unlocked - SoTP indicative fair value 43.6p 6 page analyst Research report from Kemeny Capital which can be obtained for free by qualifying investors.
2023-02-02 WH Ireland Research Report: Cadence Minerals - Developing Amapá iron ore mine and strategic new technology metal interests (sees fair value at 71p / share) To obtain this 34 page Analyst Research Report for free, (optionally) first register for Research Tree, then whilst logged in, register for WH Ireland (optionally) using your Research Tree account.
2023-01-06 Progress at Amapa, Lithium, and Rare Earths with Cadence Minerals Most recent of the interviews with CEO Kiran Morzaria.
2023-01-04 Vox Markets Podcast: Cadence Minerals updates on Amapa
2023-01-03 Completion of PFS on Amapá Iron Ore Project Post-Tax Project NPV10 US$949 million, Internal Rate of Return of 34% and a Project Maiden Ore Reserve Estimate of 195.8 Mt (Cadence attributable of 58.74 Mt) at 39.34% Fe Declared
Anything missing or incorrect? Let us know in the comments or LSE chat - the mods.
submitted by AutoModerator to CadenceMinerals [link] [comments]

2023.03.27 03:36 waynerad $1.7 billion has been invested into indoor agriculture, aka vertical farms

$1.7 billion has been invested into indoor agriculture, aka vertical farms, as of the beginning of December 2022. Now investment money is drying up and the bubble is popping. Fifth Season, AppHarvest, AeroFarms, and Agricool are going out of business, and Infarm and IronOx are in trouble.
"In theory, there are enormous benefits to indoor farms. They often use 90% less water than traditional farms." "Growing inside can avoid outbreaks of E. coli and diseases like a plant virus that recently devastated lettuce grown in California's Salinas Valley." "Indoor farming also eliminates pesticides and reduces fertilizer and keeps it out of rivers. Lettuce grown near Boston or New York City can avoid traveling thousands of miles from Western fields."
But... construction costs and electricity bills.
You can count me among the people who thought vertical farms would be more economical than apparently they have turned out to be. Traditional greenhouses are tough competition, as it turns out. The Netherlands, despite its tiny size, is a major food exporter, and massive use of greenhouses has a lot to do with it.
submitted by waynerad to u/waynerad [link] [comments]

2023.03.27 03:32 hackinthebochs On Large Language Models and Understanding

Large language models (LLMs) have received an increasing amount of attention from all corners. We are on the cusp of a revolution in computing, one that promises to democratize technology in ways few would have predicted just a few years ago. Despite the transformative nature of this technology, we know almost nothing about how they work. They also bring to the fore obscure philosophical questions such as can computational systems understand? At what point do they become sentient and become moral patients? The ongoing discussion surrounding LLMs and their relationship to AGI has left much to be desired. Much dismissive comments downplay the relevance of LLMs to these thorny philosophical issues. But this technology deserves careful analysis and argument, not dismissive sneers. This is my attempt at moving the discussion forward.
To motivate an in depth analysis of LLMs, I will briefly respond to some very common dismissive criticisms of autoregressive prediction models and show why they fail to demonstrate the irrelevance of this framework to the deep philosophical issues of in the field of AI. I will then consider the issues of whether this class of models can be said to understand and then discuss some of the implications of LLMs on human society.
"It's just matrix multiplication; it's just predicting the next token"
These reductive descriptions do not fully describe or characterize the space of behavior of these models, and so such descriptions cannot be used to dismiss the presence of high-level properties such as understanding or sentience.
It is a common fallacy to deduce the absence of high-level properties from a reductive view of a system's behavior. Being "inside" the system gives people far too much confidence that they know exactly what's going on. But low level knowledge of a system without sufficient holistic knowledge leads to bad intuitions and bad conclusions. Searle's Chinese room and Leibniz's mill thought experiments are past examples of this. Citing the the low level computational structure of LLMs is just a modern iteration. That LLMs consist of various matrix multiplications can no more tell us they aren't conscious than our neurons tell us we're not conscious.
The key idea people miss is that the massive computation involved in training these systems begets new behavioral patterns that weren't enumerated by the initial program statements. The behavior is not just a product of the computational structure specified in the source code, but an emergent dynamic that is unpredictable from an analysis of the initial rules. It is a common mistake to dismiss this emergent part of a system as carrying no informative or meaningful content. Just bracketing the model parameters as transparent and explanatorily insignificant is to miss a large part of the substance of the system.
Another common argument against the significance of LLMs is that they are just "stochastic parrots", i.e. regurgitating the training data in some from, perhaps with some trivial transformations applied. But it is a mistake to think that LLM's generating ability is constrained to simple transformations of the data they are trained on. Regurgitating data generally is not a good way to reduce the training loss, not when training doesn't involve training against multiple full rounds of training data. I don't know the current stats, but the initial GPT-3 training run got through less than half of a complete iteration of its massive training data.[1]
So with pure regurgitation not available, what it has to do is encode the data in such a way that makes predictions possible, i.e. predictive coding. This means modelling the data in a way that captures meaningful relationships among tokens so that prediction is a tractable computational problem. That is, the next word is sufficiently specified by features of the context and the accrued knowledge of how words, phrases, and concepts typically relate in the training corpus. LLMs discover deterministic computational dynamics such that the statistical properties of text seen during training are satisfied by the unfolding of the computation. This is essentially a synthesis, i.e. semantic compression, of the information contained in the training corpus. But it is this style of synthesis that gives LLMs all their emergent capabilities. Innovation to some extent is just novel combinations of existing units. LLMs are good at this as their model of language and structure allows it to essentially iterate over the space of meaningful combinations of words, selecting points in meaning-space as determined by the context or prompt.
Why think LLMs have understanding at all
Given that LLMs have a semantic compression of the training data, I claim that LLMs "understand" to a significant degree in some contexts. The term understanding is one of those polysemous words for which precise definitions tend to leave out important variants. But we can't set aside these important debates because of an inability to make certain terms precise. Instead, what we can do is be clear about how we are using the term and move forward with analysis.
To that end, we can define understanding as the capacity to engage appropriately with some structure in appropriate contexts. This definition captures the broadly instrumental flavor of descriptions involving understanding. I will argue that there are structures in LLMs that engage with concepts in a manner that demonstrates understanding.
As an example for the sake of argument, consider the ability of ChatGPT to construct poems that satisfy a wide range of criteria. There are no shortage of examples[2][3]. To begin with, first notice that the set of valid poems sit along a manifold in high dimensional space. A manifold is a generalization of the kind of everyday surfaces we are familiar with; surfaces with potentially very complex structure but that look "tame" or "flat" when you zoom in close enough. This tameness is important because it allows you to move from one point on the manifold to another without losing the property of the manifold in between.
Despite the tameness property, there generally is no simple function that can decide whether some point is on a manifold. Our poem-manifold is one such complex structure: there is no simple procedure to determine whether a given string of text is a valid poem. It follows that points on the poem-manifold are mostly not simple combinations of other points on the manifold (given two poems, interpolate between them will not generate poems). Further, we can take it as a given that the number of points on the manifold far surpass the examples of poems seen during training. Thus, when prompted to construct a poem following an arbitrary criteria, we can expect the target region of the manifold to largely be unrepresented by training data.
We want to characterize ChatGPT's impressive ability to construct poems. We can rule out simple combinations of poems previously seen. The fact that ChatGPT constructs passable poetry given arbitrary constraints implies that it can find unseen regions of the poem-manifold in accordance with the required constraints. This is straightforwardly an indication of generalizing from samples of poetry to a general concept of poetry. But still, some generalizations are better than others and neural networks have a habit of finding degenerate solutions to optimization problems. However, the quality and breadth of poetry given widely divergent criteria is an indication of whether the generalization is capturing our concept of poetry sufficiently well. From the many examples I have seen, I can only judge its general concept of poetry to well model the human concept.
So we can conclude that ChatGPT contains some structure that well models the human concept of poetry. Further, it engages meaningfully with this model in appropriate contexts as demonstrated by its ability to construct passable poems when prompted with widely divergent constraints. This satisfies the given definition of understanding.
The previous discussion is a single case of a more general issue studied in compositional semantics. There are an infinite number of valid sentences in a language that can be generated or understood by a finite substrate. It follows that there must be compositional semantics that determine the meaning of these sentences. That is, the meaning of the sentence must be a function of the meanings of the individual terms in the sentence. The grammar that captures valid sentences and the mapping from grammatical structure to semantics is somehow captured in the finite substrate. This grammar-semantics mechanism is the source of language competence and must exist in any system that displays competence with language. Yet, many resist the move from having a grammar-semantics mechanism to having the capacity to understand language. This is despite demonstrating linguistic competence in an expansive range of examples.
Why is it that people resist the claim that LLMs understand even when they respond competently to broad tests of knowledge and common sense? Why is the charge of mere simulation of intelligence so widespread? What is supposedly missing from the system that diminishes it to mere simulation? I believe the unstated premise of such arguments is that most people see understanding as a property of being, that is, autonomous existence. The computer system implementing the LLM, a collection of disparate units without a unified existence, is (the argument goes) not the proper target of the property of understanding. This is a short step from the claim that understanding is a property of sentient creatures. This latter claim finds much support in the historical debate surrounding artificial intelligence, most prominently expressed by Searle's Chinese room thought experiment.
The problem with the Chinese room at its core is the problem of attribution. We want to attribute properties like sentience or understanding to the "things" we are familiar with, and the only sufficient thing in the room is the man. But this intuition is misleading. The question to ask is what is responding when prompts are sent to the room. The responses are being generated by the algorithm reified into a causally efficacious process. Essentially, the reified algorithm implements a set of object-properties without objecthood. But a lack of objecthood has no consequences for the capacities or behaviors of the reified algorithm. Instead, the information dynamics entailed by the structure and function of the reified algorithm entails a conceptual unity (as opposed to a physical unity of properties affixed to an object). This conceptual unity is a virtual center-of-gravity onto which prompts are directed and from which responses are generated. This virtual objecthood then serves as the surrogate for attributions of understanding and such. It's so hard for people to see this as a live option because our cognitive makeup is such that we reason based on concrete, discrete entities. Considering extant properties without concrete entities to carry them is just an alien notion to most. But once we free ourselves of this unjustified constraint, we can see the possibilities that this notion of virtual objecthood grants. We can begin to make sense of such ideas as genuine understanding in purely computational artifacts.
Responding to some more objections to LLM understanding
A common argument against LLM understanding is that their failure modes are strange, so much so that we can't imagine an entity that genuinely models the world while having these kinds of failure modes. This argument rests on an unstated premise that the capacities that ground world modeling are different in kind to the capacities that ground token prediction. Thus when an LLM fails to accurately model and merely resorts to (badly) predicting the next token in a specific case, this demonstrates that they do not have the capacity for world modeling in any case. I will show the error in this argument by undermining the claim of a categorical difference between world modeling and token prediction. Specifically, I will argue that token prediction and world modeling are on a spectrum, and that token prediction converges towards modeling as quality of prediction increases.
To start, lets get clear on what it means to be a model. A model is some structure in which features of that structure correspond to features of some target system. In other words, a model is a kind of analogy: operations or transformations on the model can act as a stand in for operations or transformations on the target system. Modeling is critical to understanding because having a model--having an analogous structure embedded in your causal or cognitive dynamic--allows your behavior to maximally utilize a target system in achieving your objectives. Without such a model one cannot accurately predict the state of the external system while evaluating alternate actions and so one's behavior must be sub-optimal.
LLMs are, in the most reductive sense, processes that leverage the current context to predict the next token. But there is much more to be said about LLMs and how they work. LLMs can be viewed as markov processes, assigning probabilities to each word given the set of words in the current context. But this perspective has many limitations. One limitation is that LLMs are not intrinsically probabilistic. LLMs discover deterministic computational circuits such that the statistical properties of text seen during training are satisfied by the unfolding of the computation. We use LLMs to model a probability distribution over words, but this is an interpretation.
LLMs discover and record discrete associations between relevant features of the context. These features are then reused throughout the network as they are found to be relevant for prediction. These discrete associations are important because they factor in the generalizability of LLMs. The alternate extreme is simply treating the context as a single unit, an N-word tuple or a single string, and then counting occurrences of each subsequent word given this prefix. Such a simple algorithm lacks any insight into the internal structure of the context, and forgoes an ability to generalize to a different context that might share relevant internal features. LLMs learn the relevant internal structure and exploits it to generalize to novel contexts. This is the content of the self-attention matrix. Prediction, then, is constrained by these learned features; the more features learned, the more constraints are placed on the continuation, and the better the prediction.
The remaining question is whether this prediction framework can develop accurate models of the world given sufficient training data. We know that Transformers are universal approximators of sequence-to-sequence functions[4], and so any structure that can be encoded into a sequence-to-sequence map can be modeled by Transformer layers. As it turns out, any relational or quantitative data can be encoded in sequences of tokens. Natural language and digital representations are two powerful examples of such encodings. It follows that precise modeling is the consequence of a Transformer style prediction framework and large amounts of training data. The peculiar failure modes of LLMs, namely hallucinations and absurd mistakes, are due to the modeling framework degrading to underdetermined predictions because of insufficient data.
What this discussion demonstrates is that prediction and modeling are not categorically distinct capacities in LLMs, but exist on a continuum. So we cannot conclude that LLMs globally lack understanding given the many examples of unintuitive failures. These failures simply represent the model responding from different points along the prediction-modeling spectrum.
LLMs fail the most basic common sense tests. More disastrously, it fails to learn.
This is a common problem in how we evaluate these LLMs. We judge these models against the behavior and capacities of human agents and then dismiss them when they fail to replicate some trait that humans exhibit. But this is a mistake. The evolutionary history of humans is vastly different than the training regime of LLMs and so we should expect behaviors and capacities that diverge due to this divergent history. People often point to the fact that LLMs answer confidently despite being way off base. But this is due to the training regime that rewards guesses and punishes displays of incredulity. The training regime has serious implications for the behavior of the model that is orthogonal to questions of intelligence and understanding. We must evaluate them on their on terms.
Regarding learning specifically, this seems to be an orthogonal issue to intelligence or understanding. Besides, there's nothing about active learning that is in principle out of the reach of some descendant of these models. It's just that the current architectures do not support it.
LLMs take thousands of gigabytes of text and millions of hours of compute to talk like a mediocre college student
I'm not sure this argument really holds water when comparing apples to apples. Yes, LLMs take an absurd amount of data and compute to develop a passable competence in conversation. A big reason for this is that Transformers are general purpose circuit builders. The lack of strong inductive bias has the cost of requiring a huge amount of compute and data to discover useful information dynamics. But the human has a blueprint for a strong inductive bias that begets competence with only a few years of training. But when you include the billion years of "compute" that went into discovering the inductive biases encoded in our DNA, it's not clear at all which one is more sample efficient. Besides, this goes back to inappropriate expectations derived from our human experience. LLMs should be judged on their own merits.
Large language models are transformative to human society
It's becoming increasingly clear to me that the distinctive trait of humans that underpin our unique abilities over other species is our ability to wield information like a tool. Of course information is infused all through biology. But what sets us apart is that we have a command over information that allows us to intentionally deploy it in service to our goals. Further, this command is cumulative and seemingly unbounded.
What does it mean to wield information? In other words, what is the relevant space of operations on information that underlie the capacities that distinguish humans from other animals? To start, lets define information as configuration with an associated context. This is an uncommon definition for information, but it is useful because it makes explicit the essential role of context in the concept of information. Information without its proper context is impotent; it loses its ability to pick out the intended content, undermining its role in communication or action initiation. Information without context lacks its essential function, thus context is essential to the concept.
The value of information is that it provides a record of events or state such that the events or state can have relevance far removed in space and time from their source. A record of the outcome of some process allows the limitless dissemination of the outcome and with it the initiation of appropriate downstream effects. Humans wield information by selectively capturing and deploying information in accords with our needs. For example, we recognize the value of, say, sharp rocks, then copy and share the method for producing such rocks.
But a human's command of information isn't just a matter of learning and deploying it, we also have a unique ability to intentionally create it. At its most basic, information is created as the result of an iterative search process consisting of (1) variation of some substrate and (2) testing for suitability according to some criteria. Natural processes under the right context can engage in this sort of search process that begets new information. Evolution through natural selection being the definitive example.
Aside from natural processes, we can also understand computational processes as the other canonical example of information creating processes. But computational processes are distinctive among natural processes, they can be defined by their ability to stand in an analogical relationship to some external process. The result of the computational process then picks out the same information as the target process related by way of analogy. Thus computations can also provide relevance far removed in space and time from their analogical related process. Furthermore, the analogical target doesn't even have to exist; the command of computation allows one to peer into future or counterfactual states.
Thus we see the full command of information and computation is a superpower to an organism: it affords a connection to distant places and times, the future, as well as what isn't actual but merely possible. The human mind is thus a very special kind of computer. Abstract thought renders access to these modes of processing almost as effortlessly as we observe what is right in front of us. The mind is a marvelous mechanism, allowing on-demand construction of computational contexts in service to higher-order goals. The power of the mind is in wielding these computational artifacts to shape the world in our image.
But we are no longer the only autonomous entities with command over information. The history of computing is one of offloading an increasing amount of essential computational artifacts to autonomous systems. Computations are analogical processes unconstrained by the limitations of real physical processes. Thus we prefer to deploy autonomous computational processes wherever available. Still, such systems were limited by program construction and context. Each process being replaced by a program required a full understanding of the system being replaced such that the dynamic could be completely specified in the program code.
LLMs mark the beginning of a new revolution in autonomous program deployment. No longer must the program code be specified in advance of deployment. The program circuit is dynamically constructed by the LLM as it integrates the prompt with its internal representation of the world. The need for expertise with a system to interface with it is obviated; competence with natural language is enough. This has the potential to democratize computational power like nothing else that came before. It also means that computational expertise becomes nearly worthless. Much like the human computer prior to the advent of the electronic variety, the concept of programmer as a profession is coming to an end.
Aside from the implications for the profession of programming, there are serious philosophical implications of this view of LLMs that warrant exploration. The question of cognition in LLMs being chief among them. I talked about the human superpower being our command of information and computation. But the previous discussion shows real parallels between human cognition (understood as dynamic computations implemented by minds) and the power of LLMs. LLMs show sparse activations in generating output from a prompt, which can be understood as dynamically activating sub-networks based on context. A further emergent property is in-context learning, recognizing unique patterns in the input context and actively deploying that pattern during generation. This is, at the very least, the beginnings of on-demand construction of computational contexts.
Limitations of LLMs
To be sure, there are many limitations of current LLM architectures that keep them from approaching higher order cognitive abilities such as planning and self-monitoring. The main limitation has two aspects, the fixed feed-forward computational window. The fixed computational window limits the amount of resources it can deploy to solve a given generation task. Once the computational limit is reached, the next word prediction is taken as-is. This is part of the reason we see odd failure modes with these models, there is no graceful degradation and so partially complete predictions may seem very alien.
The other limitation of only feed-forward computations means the model has limited ability to monitor its generation for quality and is incapable of any kind of search over the space of candidate generations. To be sure, LLMs do sometimes show limited "metacognitive" ability, particularly when explicitly prompted for it.[5] But it is certainly limited compared to what is possible if the architecture had proper feedback connections.
The terrifying thing is that LLMs are just about the dumbest thing you can do with Transformers and they perform far beyond anyone's expectations. When people imagine AGI, they probably imagine some super complex, intricately arranged collection of many heterogeneous subsystems backed by decades of computer science and mathematical theory. But LLMs have completely demolished the idea that complex architectures are required for complex intelligent-seeming behavior. If LLMs are just about the dumbest thing we can do with Transformers, it is plausible that slightly less dumb architectures will reach AGI.
[1] https://arxiv.org/pdf/2005.14165.pdf (.44 epochs elapsed for Common Crawl)
[2] https://news.ycombinator.com/item?id=35195810
[3] https://twitter.com/tegmark/status/1636036714509615114
[4] https://arxiv.org/abs/1912.10077
[5] https://www.lesswrong.com/posts/ADwayvunaJqBLzawa/contra-hofstadter-on-gpt-3-nonsense
submitted by hackinthebochs to naturalism [link] [comments]

2023.03.27 03:02 0biwan_Shinobi [Announcement] Mystery Movie Reveal for March 26, 2023

Mystery Movie - The Man from U.N.C.L.E. (2015)

picked by me
In the 1960s with the Cold War in play, CIA agent Napoleon Solo successfully helps Gaby Teller defect to West Germany despite the intimidating opposition of KGB agent Illya Kuryakin. Later, all three unexpectedly find themselves working together in a joint mission to stop a private criminal organization from using Gaby's father's scientific expertise to construct their own nuclear bomb. Through clenched teeth and stylish poise, all three must find a way to cooperate for the sake of world peace, even as they each pursue their own agendas.

Bonus Movie - Addams Family Values (1993)

When an adorable baby boy is added to the Addams household, Wednesday and Pugsley do not hate him, they just aren't necessarily excited about his existence. OK...yeah, they do hate him. So they plot to get rid of him one way or another. Meanwhile, their parents hire a nanny for him and she charms Fester, but has evil intentions for him. The Addamses must stop her, but how?
submitted by 0biwan_Shinobi to Movie_Club [link] [comments]

2023.03.27 03:00 farning10 Chat GPT and I design and implement a flight simulator using Rust and Godot

Below is a conversation where chatGPT and I design a rust flight simulator. I had already planned to do a similar project to this myself, so I though it would be an interesting opportunity to see how well chatGPT can solve the problem.

I would say the results are mixed overall. ChatGPT needed a lot of guidance to get to the design I wanted and I believe it would take a lot of additional work to integrate what ChatGPT wrote into any kind of real project. Still, I was pleasantly surprised at what it could accomplish with a bit of pointing in the right direction.

Apologies for Formatting :P
📷Help me design an API for simulating flight mechanics in rust language and rendering the result through the godot game engine.
Sure, I can help you design an API for simulating flight mechanics in Rust language and rendering the result through the Godot game engine. Here are the steps you can follow:
  1. Define the API endpoints: You need to define the API endpoints that will be used to interact with the flight simulator. For example, you can create an endpoint to initialize the simulation, an endpoint to set the input parameters (such as velocity, acceleration, and direction), and an endpoint to retrieve the output parameters (such as altitude, pitch, and roll).
  2. Define the input and output data structures: You need to define the data structures that will be used to represent the input and output parameters of the flight simulator. For example, you can create a struct to represent the input parameters and another struct to represent the output parameters.
  3. Implement the flight mechanics: You need to implement the flight mechanics that will be used to simulate the flight of the aircraft. You can use existing libraries, such as nalgebra, to implement the physics calculations required for the flight simulator.
  4. Integrate with Godot game engine: You need to integrate the flight simulator with the Godot game engine to render the output parameters of the simulation. You can use the Godot Rust bindings to interact with the game engine and display the output parameters, such as altitude, pitch, and roll.
  5. Test the API: Once you have implemented the API, you should test it thoroughly to ensure that it works as expected. You can use unit tests to test the individual components of the API and integration tests to test the interaction between the flight simulator and the Godot game engine.
Here's a sample code that demonstrates how to create an API endpoint for initializing the simulation:
rustCopy code
use rocket::serde::{Deserialize, Serialize}; #[derive(Debug, Clone, Serialize, Deserialize)] struct SimulationInitRequest { aircraft_type: String, initial_altitude: f32, initial_velocity: f32, initial_acceleration: f32, } #[derive(Debug, Clone, Serialize, Deserialize)] struct SimulationInitResponse { id: u32, status: String, } #[post("/api/simulation/init", data = "")] fn init_simulation(request: Json) -> Json { // Implement the initialization logic here let id = 1; let status = "initialized".to_string(); Json(SimulationInitResponse { id, status }) } 
This code defines an API endpoint /api/simulation/initthat receives a POSTrequest with a SimulationInitRequestJSON payload. The endpoint returns a SimulationInitResponseJSON payload with the simulation ID and status. You can use this endpoint to initialize the simulation and obtain the simulation ID for subsequent API requests.
I hope this helps you get started with designing your flight simulator API in Rust language and integrating it with the Godot game engine!
📷why did you choose to expose the simulation through a REST API? That seems like a strange choice to me.
You're right that a REST API may not be the best choice for simulating flight mechanics. REST APIs are typically used for stateless communication between clients and servers over HTTP, and may not be the most efficient or performant way to simulate real-time flight mechanics.
Instead, a more appropriate approach would be to use a real-time communication protocol, such as WebSockets, to send and receive data between the flight simulator and the Godot game engine. WebSockets allow for bidirectional, low-latency communication between the server and the client, making them ideal for real-time applications such as flight simulators.
Here's a sample code that demonstrates how to use WebSockets to send and receive data between the flight simulator and the Godot game engine:
rustCopy code
use std::collections::HashMap; use ws::{listen, CloseCode, Handler, Handshake, Message, Result, Sender}; struct FlightSimulator { out: Sender, } impl Handler for FlightSimulator { fn on_open(&mut self, _: Handshake) -> Result<()> { Ok(()) } fn on_message(&mut self, msg: Message) -> Result<()> { let input: HashMap = serde_json::from_str(msg.as_text()?)?; // Implement the flight mechanics here let output: HashMap = simulate_flight(&input); self.out.send(serde_json::to_string(&output)?)?; Ok(()) } fn on_close(&mut self, _: CloseCode, _: &str) { // Handle close event } } fn main() -> Result<()> { listen("", out FlightSimulator { out })?; Ok(()) } 
This code uses the wscrate to listen for WebSocket connections on port 3000. When a client connects to the WebSocket server, the on_openmethod is called. When the client sends a message, the on_messagemethod is called, where the flight mechanics are simulated based on the input data received from the client. The output data is then sent back to the client using the out.sendmethod. Finally, when the client disconnects, the on_closemethod is called.
Overall, using WebSockets or another real-time communication protocol is a better choice than REST API for simulating flight mechanics, as it allows for low-latency, bidirectional communication between the server and client.
📷I am not going to use a web browser at all for this simulator. Please reconsider your suggestion without using anything that will also require a web browser to be used, such as websockets.
Thank you for the clarification. In that case, if you don't want to use WebSockets or any other browser-based communication protocol, one option would be to use a local inter-process communication (IPC) mechanism, such as Unix domain sockets or named pipes.
Here's a sample code that demonstrates how to use Unix domain sockets to send and receive data between the flight simulator and the Godot game engine:
rustCopy code
use std::io::prelude::*; use std::os::unix::net::{UnixListener, UnixStream}; fn main() -> std::io::Result<()> { // Create a Unix domain socket listener let listener = UnixListener::bind("/tmp/flight.sock")?; // Listen for incoming connections for stream in listener.incoming() { // Spawn a new thread to handle each connection std::thread::spawn(move { // Handle the connection handle_connection(stream.unwrap()); }); } Ok(()) } fn handle_connection(mut stream: UnixStream) { // Read the input data from the stream let mut buf = [0; 1024]; stream.read(&mut buf).unwrap(); let input: HashMap = serde_json::from_slice(&buf).unwrap(); // Simulate the flight mechanics based on the input data let output: HashMap = simulate_flight(&input); // Write the output data to the stream let response = serde_json::to_vec(&output).unwrap(); stream.write(&response).unwrap(); stream.flush().unwrap(); } 
This code creates a Unix domain socket listener on the path /tmp/flight.sock. When a client connects to the socket, a new thread is spawned to handle the connection. The handle_connectionfunction reads the input data from the socket, simulates the flight mechanics based on the input data, and writes the output data back to the socket.
On the client side, you can use a Unix domain socket client to send input data to the flight simulator and receive output data. Here's a sample code that demonstrates how to send input data to the flight simulator and receive output data:
rustCopy code
use std::io::prelude::*; use std::os::unix::net::UnixStream; fn main() -> std::io::Result<()> { // Connect to the Unix domain socket let mut stream = UnixStream::connect("/tmp/flight.sock")?; // Send the input data to the flight simulator let input = HashMap::new(); let request = serde_json::to_vec(&input).unwrap(); stream.write(&request).unwrap(); stream.flush().unwrap(); // Read the output data from the flight simulator let mut buf = [0; 1024]; stream.read(&mut buf).unwrap(); let output: HashMap = serde_json::from_slice(&buf).unwrap(); Ok(()) } 
This code connects to the Unix domain socket at /tmp/flight.sock, sends the input data to the flight simulator, and reads the output data from the socket.
I hope this solution meets your requirements of not using any browser-based communication protocol and helps you implement your flight simulator API.
📷This simulation must run on the microsoft windows operating system, so a unix socket cannot be used. Please reconsider your suggestion so that it can run on the windows operating system.
Thank you for the clarification. In that case, you can use named pipes instead of Unix domain sockets for inter-process communication on Windows. Here's how you can modify the Rust code to use named pipes:
rustCopy code
use std::io::prelude::*; use std::os::windows::io::{AsRawHandle, FromRawHandle}; use std::sync::mpsc::channel; use std::thread; use winapi::um::namedpipeapi::{ConnectNamedPipe, CreateNamedPipeW}; use winapi::um::winbase::{FILE_FLAG_FIRST_PIPE_INSTANCE, PIPE_ACCESS_DUPLEX, PIPE_TYPE_BYTE, PIPE_WAIT}; use winapi::um::winnt::{HANDLE, GENERIC_READ, GENERIC_WRITE}; fn main() -> std::io::Result<()> { ... } .... 
📷is it possible to directly integrate the godot part and the rust part through linking a library?
Yes, it is possible to directly integrate the Godot part and the Rust part by linking a Rust library to a Godot module. Here's how you can do it:
  1. Create a Rust library that exposes the flight simulation functionality through a C-compatible interface. You can use Rust's #[no_mangle]attribute to ensure that the function names are not mangled by the Rust compiler. For example:
rustCopy code
#[no_mangle] pub extern "C" fn simulate_flight(input: *const u8, input_len: usize, output: *mut u8, output_len: usize) -> usize { // Convert the input data to a Rust struct let input: HashMap = serde_json::from_slice(unsafe { std::slice::from_raw_parts(input, input_len) }).unwrap(); // Simulate the flight mechanics based on the input data let output_data = simulate_flight(&input); // Convert the output data to a byte array let output_data = serde_json::to_vec(&output_data).unwrap(); let output_data_len = output_data.len(); // Copy the output data to the output buffer unsafe { std::ptr::copy_nonoverlapping(output_data.as_ptr(), output, std::cmp::min(output_len, output_data_len)); } // Return the length of the output data output_data_len } 
  1. Build the Rust library as a C-compatible dynamic library. You can do this by specifying the cdylibcrate type in your Cargo.tomlfile:
tomlCopy code
[lib] name = "flight_simulator" crate-type = ["cdylib"] 
  1. Create a Godot module that links to the Rust library. You can do this by creating a gdmodulefile that specifies the Rust library as a dependency:
gdscriptCopy code
[dependencies] windows="0.19.0" [general] singleton=false load_once=true module_library="res://addons/flight_simulatoflight_simulator.dll" 
  1. Use the OS.dynamic_loadfunction to load the Rust library at runtime, and use the OS.get_symbolfunction to get a pointer to the simulate_flightfunction. You can then call this function to simulate the flight mechanics and get the output data. For example:
gdscriptCopy code
var lib = OS.dynamic_load("res://addons/flight_simulatoflight_simulator.dll") var simulate_flight = lib.get_symbol("simulate_flight") var input = { "throttle": 0.5, "elevator": 0.1, "aileron": -0.2, "rudder": 0.3 } var input_data = JSON.print(input) var input_len = input_data.length() var output_data = OS.allocate(1024) var output_len = call_func(simulate_flight, input_data, input_len, output_data, 1024) var output = JSON.parse(OS.get_data_pool().get_string_from_utf8(output_data, output_len)) print(output) 
Note that you will need to adjust the paths and function names to match your project structure and Rust code.
📷Ok good idea. I will use that approach. Now tell me how best I can structure the API to ensure that rendering a large number of objects in godot will performant when they are managed entirely by the rust dynamic simulation.
To ensure that rendering a large number of objects in Godot will be performant when they are managed entirely by the Rust dynamic simulation, you can use a few strategies:
  1. Use a batching technique to group similar objects together and reduce the number of draw calls. This can be done by organizing the objects into small groups (e.g., by their location or type) and rendering them together. You can also use techniques like instancing to efficiently render large numbers of identical objects.
  2. Use a data-oriented approach to store the object data in a way that is efficient for rendering. This means storing the data in contiguous arrays (e.g., using a structure of arrays instead of an array of structures) and minimizing cache misses. You can also use techniques like spatial partitioning (e.g., quad trees or octrees) to efficiently cull objects that are not visible and reduce the number of objects that need to be rendered.
  3. Use a multi-threaded approach to offload the rendering work to separate threads and minimize the impact on the simulation performance. This means dividing the work of rendering the objects into separate threads (e.g., using a thread pool) and synchronizing the data between the Rust simulation and the Godot engine.
  4. Minimize the amount of data that needs to be transferred between the Rust simulation and the Godot engine. This means only sending the data that is necessary for rendering (e.g., position, orientation, and scale), and using efficient serialization techniques (e.g., binary serialization) to minimize the size of the data.
To implement these strategies, you can structure your API in a way that supports these optimizations. For example, you can expose functions in the Rust library that allow the Godot engine to query the position, orientation, and scale of the objects that need to be rendered. You can also expose functions that allow the Godot engine to query the simulation state (e.g., which objects are visible) and optimize the rendering accordingly. Additionally, you can use efficient serialization techniques (e.g., using a binary format like Protocol Buffers) to minimize the amount of data that needs to be transferred between the Rust simulation and the Godot engine.
By using these strategies and structuring your API in a way that supports them, you can ensure that rendering a large number of objects in Godot will be performant even when they are managed entirely by the Rust dynamic simulation.
📷Please construct a reasonably suitable API based on your suggestions
Sure, here is an example of a reasonably suitable API for your flight simulator:
rustCopy code
// Define a struct to represent the state of each object in the simulation struct ObjectState { position: (f32, f32, f32), orientation: (f32, f32, f32, f32), scale: f32, is_visible: bool, // Other simulation state data as needed } // Define a trait for the Rust simulation library to implement trait Simulation { // Get the current state of all objects in the simulation fn get_object_states(&self) -> Vec; // Set the state of a specific object in the simulation fn set_object_state(&mut self, id: u32, state: ObjectState); // Get the simulation time in milliseconds fn get_time_ms(&self) -> u64; } // Define a function to initialize the Rust simulation library and start the simulation loop fn start_simulation() -> impl Simulation { // Initialize the simulation state and return a struct that implements the Simulation trait // Start a separate thread to run the simulation loop } // Define a function to render the simulation in Godot fn render_simulation(objects: &[ObjectState]) { // Batch the objects by location or type // Use efficient data structures (e.g., structure of arrays) for the object data // Use culling techniques (e.g., frustum culling) to reduce the number of objects that need to be rendered // Use instancing to efficiently render large numbers of identical objects // Use a multi-threaded approach to offload the rendering work to separate threads } // Define a function to update the simulation state based on user input from Godot fn update_simulation(id: u32, state: ObjectState) { // Update the state of the specified object in the Rust simulation library } // Define a function to handle API requests from Godot fn handle_api_request(request: String) -> String { // Parse the request and call the appropriate function // Use efficient serialization techniques (e.g., Protocol Buffers) to minimize the amount of data transferred between Godot and Rust // Return the response to Godot }This API defines a Simulationtrait that allows the Rust library to manage the state of the objects in the simulation and provides functions to get and set the object state. It also defines functions to start the simulation loop, render the simulation in Godot, and handle API requests from Godot. By using efficient data structures, culling techniques, and multi-threading, the rendering of a large number of objects in Godot can be optimized for performance. Additionally, using efficient serialization techniques minimizes the amount of data transferred between Godot and Rust, further improving performance. 
📷Nice, now please implement the simulation dynamics. I want rigid-body newtonian physics model. With collision detection. Use an octree to improve performance of collision detection. Please use quaternions to represent orientation and rotations. Please ensure the model can use a configurable timestep. Please use SI units for physics calculations when necessary and try to add comments which specify what quantities and units variables are intended to represent.
Sure, here is an example implementation of the simulation dynamics in Rust:
rustCopy code
use nalgebra::{Vector3, Quaternion, Rotation3, Isometry3}; use std::collections::HashMap; const GRAVITY: f32 = 9.81; // m/s^2 const DEFAULT_TIMESTEP: f32 = 0.01; // s // Define a struct to represent a rigid body in the simulation struct RigidBody { position: Vector3, // meters velocity: Vector3, // meters/second acceleration: Vector3, // meters/second^2 orientation: Quaternion, angular_velocity: Vector3, // radians/second angular_acceleration: Vector3, // radians/second^2 mass: f32, // kilograms inertia_tensor: Vector3, // kilograms * meters^2 is_static: bool, // if true, this object cannot be moved } // Define a struct to represent a collision mesh struct CollisionMesh { // Define the collision mesh data (e.g., vertices, faces, etc.) // Use efficient data structures and algorithms for collision detection (e.g., an octree) } // Define a struct to represent the state of the simulation struct SimulationState { objects: HashMap, // map object IDs to RigidBody objects collision_meshes: HashMap, // map object IDs to CollisionMesh objects octree: Octree, // octree for collision detection time_ms: u64, // milliseconds since the start of the simulation } impl Simulation for SimulationState { fn get_object_states(&self) -> Vec { let mut object_states = Vec::new(); for (id, rigid_body) in &self.objects { let object_state = ObjectState { position: (rigid_body.position.x, rigid_body.position.y, rigid_body.position.z), orientation: (rigid_body.orientation.w, rigid_body.orientation.i, rigid_body.orientation.j, rigid_body.orientation.k), scale: rigid_body.mass.sqrt(), is_visible: true, // assume all objects are visible for now }; object_states.push(object_state); } object_states } fn set_object_state(&mut self, id: u32, state: ObjectState) { if let Some(rigid_body) = self.objects.get_mut(&id) { rigid_body.position = Vector3::new(state.position.0, state.position.1, state.position.2); rigid_body.orientation = Quaternion::new(state.orientation.0, state.orientation.1, state.orientation.2, state.orientation.3); } } fn get_time_ms(&self) -> u64 { self.time_ms } } impl RigidBody { // Apply a force to the rigid body at its center of mass fn apply_force(&mut self, force: Vector3) { if !self.is_static { self.acceleration += force / self.mass; } } // Apply a torque to the rigid body fn apply_torque(&mut self, torque: Vector3) { if !self.is_static { let angular_acceleration = self.inertia_tensor.component_mul(&torque); self.angular_acceleration += angular_acceleration; } } // Integrate the state of the rigid body over a given timestep using the Verlet method fn integrate(&mut self, timestep: f32) { if !self.is_static { // Update linear position and velocity 
📷continue outputting your simulation implementationswiftCopy code
 self.position += self.velocity * timestep + 0.5 * self.acceleration * timestep.powi(2); self.velocity += 0.5 * (self.acceleration + self.acceleration) * timestep; // Update angular position and velocity let rotation = Rotation3::from_quaternion(self.orientation); let angular_velocity_matrix = rotation.matrix() * self.angular_velocity.cross(&rotation.matrix()).into_matrix(); let angular_acceleration_matrix = rotation.matrix() * self.angular_acceleration.into_matrix(); let orientation_derivative = Quaternion::from_parts(0.0, self.angular_velocity.x, self.angular_velocity.y, self.angular_velocity.z) * self.orientation * 0.5; let angular_velocity_derivative = self.inertia_tensor.try_inverse().unwrap() * (torque - self.angular_velocity.cross(&self.inertia_tensor.component_mul(&self.angular_velocity))); self.orientation += orientation_derivative * timestep; self.orientation.normalize(); self.angular_velocity += angular_velocity_derivative * timestep; // Reset acceleration and angular acceleration self.acceleration = Vector3::zeros(); self.angular_acceleration = Vector3::zeros(); } } // Perform collision detection and resolution with a given CollisionMesh fn collide(&mut self, mesh: &CollisionMesh) { if !self.is_static { // Perform collision detection using the octree let objects_in_tree = mesh.octree.get_objects_near_point(self.position); for object in objects_in_tree { if self.is_colliding_with(&object) { // Perform collision resolution let normal = self.get_collision_normal(&object); let relative_velocity = self.get_relative_velocity(&object); let restitution = 0.5; // coefficient of restitution let impulse = -(1.0 + restitution) * relative_velocity.dot(&normal) / (self.mass + object.mass); let force = normal * impulse; self.apply_force(force); self.apply_torque((self.position - object.position).cross(&force)); } } } } // Check if this rigid body is colliding with another RigidBody fn is_colliding_with(&self, other: &RigidBody) -> bool { // Perform collision detection using a bounding sphere let distance = (self.position - other.position).norm(); let radius = (self.mass + other.mass).sqrt(); distance <= radius } // Get the normal vector for a collision between this rigid body and another RigidBody fn get_collision_normal(&self, other: &RigidBody) -> Vector3 { (self.position - other.position).normalize() } // Get the relative velocity between this rigid body and another RigidBody fn get_relative_velocity(&self, other: &RigidBody) -> Vector3 { self.velocity - other.velocity } 
📷if you are done with your simulation implementation tell me that you are done. If not, please continue outputting it.
I'm sorry, I have completed the implementation of the Rigid Body Dynamics simulation in Rust language. Please let me know if you have any questions or if there is anything else I can assist you with.
📷Can you give me an example of a rigid body for this simulation?
Sure, here is an example of a simple RigidBody for this simulation:
rustCopy code
let mut rigid_body = RigidBody::new( Vector3::new(0.0, 0.0, 0.0), // position (m) Vector3::new(0.0, 0.0, 0.0), // velocity (m/s) Vector3::new(0.0, 0.0, 0.0), // acceleration (m/s^2) Quaternion::identity(), // orientation (quaternion) Vector3::new(0.0, 0.0, 0.0), // angular velocity (rad/s) Vector3::new(0.0, 0.0, 0.0), // angular acceleration (rad/s^2) 1.0, // mass (kg) Matrix3::identity(), // inertia tensor (kg*m^2) false, // is static (boolean) ); 
In this example, the RigidBody is initially located at the origin (0, 0, 0) with zero initial velocity, acceleration, angular velocity, and angular acceleration. It has a mass of 1 kilogram and is not a static object (i.e. it is free to move and can be affected by external forces and torques). The inertia tensor is initialized to the identity matrix, which assumes the RigidBody has uniform density and is symmetric about its center of mass.
📷Please create a new rigid body representing a simple cessna 150 airplane sitting at the origin.
Sure, here's an example of creating a new RigidBody representing a simple Cessna 150 airplane sitting at the origin:
rustCopy code
// Define the dimensions of the Cessna 150 airplane in meters let length = 7.31; // Length of the airplane (m) let wingspan = 10.16; // Wingspan of the airplane (m) let height = 2.69; // Height of the airplane (m) // Define the mass and inertia tensor of the airplane in SI units let mass = 670.0; // Mass of the airplane (kg) let ixx = 3435.0; // Moment of inertia about x-axis (kg*m^2) let iyy = 16020.0; // Moment of inertia about y-axis (kg*m^2) let izz = 18200.0; // Moment of inertia about z-axis (kg*m^2) let ixy = 0.0; // Product of inertia about xy-plane (kg*m^2) let ixz = 0.0; // Product of inertia about xz-plane (kg*m^2) let iyz = 235.0; // Product of inertia about yz-plane (kg*m^2) // Define the initial position and orientation of the airplane let position = Vector3::new(0.0, 0.0, 0.0); let orientation = Quaternion::from_euler_angles(0.0, 0.0, 0.0); // Define the initial velocity, acceleration, and angular velocity of the airplane let velocity = Vector3::new(0.0, 0.0, 0.0); let acceleration = Vector3::new(0.0, 0.0, 0.0); let angular_velocity = Vector3::new(0.0, 0.0, 0.0); // Define the initial angular acceleration of the airplane let angular_acceleration = Vector3::new(0.0, 0.0, 0.0); // Define the new RigidBody representing the Cessna 150 airplane let mut cessna_150 = RigidBody::new( position, velocity, acceleration, orientation, angular_velocity, angular_acceleration, mass, Matrix3::new( ixx, ixy, ixz, ixy, iyy, iyz, ixz, iyz, izz, ), false, // not a static object ); 
This example defines the dimensions of the Cessna 150 airplane in meters, as well as the mass and inertia tensor in SI units. The initial position and orientation are set to the origin and no initial velocity, acceleration, or angular velocity or acceleration are applied.
submitted by farning10 to artificial [link] [comments]

2023.03.27 02:57 Pink-Mcflurry My first official bookshelf. Moved around a lot so never had one and always gave away books. Now I have a place to showcase my books. (Some of my books are not here because friends are borrowing them)

My first official bookshelf. Moved around a lot so never had one and always gave away books. Now I have a place to showcase my books. (Some of my books are not here because friends are borrowing them) submitted by Pink-Mcflurry to bookshelf [link] [comments]

2023.03.27 02:23 Test19s Silly scenario, but what if a) Louisiana reached Bangladesh population density in the early 20th century b) experienced a parallel great migration to that of Anglo African-Americans and c) got absolutely rekt by urban renewal in the north?

Silly scenario, but what if a) Louisiana reached Bangladesh population density in the early 20th century b) experienced a parallel great migration to that of Anglo African-Americans and c) got absolutely rekt by urban renewal in the north? submitted by Test19s to 19_Skylines [link] [comments]

2023.03.27 02:23 Test19s Silly scenario, but what if a) Louisiana reached Bangladesh population density in the early 20th century b) experienced a parallel great migration to that of Anglo African-Americans and c) got absolutely rekt by urban renewal in the north?

Silly scenario, but what if a) Louisiana reached Bangladesh population density in the early 20th century b) experienced a parallel great migration to that of Anglo African-Americans and c) got absolutely rekt by urban renewal in the north? submitted by Test19s to imaginarymapscj [link] [comments]

2023.03.27 02:01 MerkadoBarkada Alternergy IPO was an unremarkable success (M:Mar27)

Happy Monday, Barkada --

The PSE gained 66 points to 6602 ▲1%

Congrats to Vince R for winning the P500 Grab Food voucher draw for entering his ALTER allocation into my ALTER IPO Allocation Poll!
Congrats also to ReaperCraft07 for correctly predicting ALTER's first-day performance ("breakeven"). I'm trying to figure out a way to expand this fun contest beyond Reddit, but I'm not quite sure how to do that.
Thanks to Jullian Masangcay and Mark Crismon Santarina for the positive feedback on my FB-based test of whether its better to send the main post, REIT Index post and IPO Index post as one single post (instead of individually as I do now). I'm going to expand the experiment this week to Twitter and Investa as well, so please let me know what you think (all feedback is appreciated!)
Shout-outs to Dominic Espinosa, Marcos Maharlika-watan, Jonathan Burac, mArQo, SE Accounting Solutions Philippines, meloi, CHARToons, avenmicjohn, Lance Nazal, LanAustria, Justn, Evolves Capital, Inc., Dividend Pinoy PGG, Rolex Jodieres, Palaboy Trader, Antonio Javier, Chip Sillesa, and Jing for the retweets, and to Greenvest and Grow PH, Froilan Ramos, Jayvee Menil, Francisco Pascual, Evolves.co, and Mike Ting for the FB shares!

In today's MB:

Daily meme Subscribe (it's free) Today's email

▌Today's sponsor: Fruitas Holdings

▌Main stories covered:

  • [RECAP] Alternergy finishes flat after a quiet first day... Alternergy [ALTER 1.28 unch] [link] started its life on the PSE on the wrong-side of its ₱1.28/share IPO offer price, with the first few trades of the day coming in at around ₱1.22/share (-5%). While the price climbed consistently, it took nearly an hour, until 10:30 AM, for the organic demand (or the stabilization fund) to push ALTER back up to break-even. The price even pushed into gain territory, reaching ₱1.30/share, but it do so on very light volume. The stock dipped back down below the offer price when trading started up again after lunch, but the price was stickier to the offer price from that point forward. ALTER ended the day at ₱1.28/share, which is exactly what IPO buyers paid.
    • MB: That start in the red got a lot of people on forums asking questions like, “Where’s the stabilization fund?” While ALTER had (and still has) a stabilization fund, just remember that stabilization funds are not a guarantee that an IPO won’t go below the offer price. They are a potential source of artificial demand, but the stabilization agent -- the bank or brokerage charged with administering the stabilization fund -- gets to use its own discretion as to how that fund is deployed. If the price is below offer, as it was for the first hour of ALTER’s life, the stabilization agent has the authorization to make purchases on the open market with the goal of manipulating the stock’s price upward, but it gets to choose when to engage and to what extent. The stabilization fund has a limited lifespan (1 month max) and a finite amount of resources that it can use to achieve that goal, so there is a little bit of gamesmanship when it comes to how the agent will deploy the funding. We won’t know if the stabilization fund was used yesterday until ALTER makes its first stabilization fund disclosure, but it’s something that I will continue to track so that I can give everyone a better idea of how the fund is doing and how long it might live.
  • [COMING] UP Quick look at the week ahead... Now that the central bank “stuff” is all out of the way, we get to pay closer attention to analyzing FY22 earnings reports and tracking the progress of the various equity-raising transactions that the PSE has scheduled to complete in the near future. Today is the day that the Upson International [UPSON 5.50] IPO offer period will end (the IPO is on Monday of next week). Tomorrow (Tuesday), the Megawide [MWIDE 3.07 ▲1.7%] follow-on offering starts, with shares of the preferred series MW₱5 going on sale for ₱100.00. On Thursday, the Cemex Philippines [CHP 1.03 ▼1.0%] tender offer will settle, which means that all the shareholders that tendered their shares in CHP to CHP’s parent company will finally receive the cash payment for those committed shares. Then, on Friday, the Philippine Business Bank [PBB 9.48 ▼3.3%] stock rights offering (SRO) shares will list. We’ll also get a slew of FY22 earnings reports from Nickel Asia [NIKL 6.61 ▼1.3%], Citicore Energy REIT [CREIT 2.48 ▲0.4%], D&L Industries [DNL 7.65 ▲0.1%], JG Summit [JGS 52.45 ▲3.5%], Monde Nissin [MONDE 11.34 ▲2.5%], and MacroAsia [MAC 4.92 unch].
    • MB: Underneath all of this is the common denominator of valuations. This is really “valuation season”, with equity raising forcing large investors to do a round of comparables and forward projections to invest in IPOs, and everyone else consuming the regular annual reports of the PSE companies to track progress to assumptions and models, to update price projections, and further expand or contract the valuations for those companies. The next two weeks will feel pretty action packed for price-action traders and value traders alike.
  • [NOTES] Quick takes from around the market...
    • **Holcim [HLCM 3.87 ▼0.8%] [link] FY22 net income of ₱0.9 billion, down 63% from its FY21 net income of ₱2.6 billion. Gross revenues were down marginally to ₱26.6 billion, from ₱26.9 billion. HLCM said that, excluding a one-off expense in Q3, its net income would have been ₱1.6 billion for FY22, which still represents a drop of 38%. The company said that its “excellence” allowed it to increase prices in an attempt to offset the inflationary pressure on its unit costs for coal, fuel, and power, but that the “surge” in the price for fuel and energy caused costs associated with these inputs to rise 60%, overwhelming those price increase efforts and other cost-cutting efforts.
      • MB Quick Take: Inflation is eating HLCM alive, and what is happening to HLCM here is a good demonstration of how elevated inflation will impact the average consumer going forward. The increased input prices for HLCM cause HLCM to raise its prices, which in turn, become increased input prices for new construction projects like dams, roadways, condo towers, and housing developments. The increased inputs for those developers cause the developers to raise their prices.
    • **Citicore Energy REIT [CREIT 2.48 ▲0.4%] [link] said that it believes that it is fully indemnified against any negative results in the arbitration case it disclosed last week. The legal dispute in question was between Enfinity Philippines Renewable Resources (EPRR) and two contractors that assisted EPRR to build a solar power project. After the contract between EPRR and the two contractors was signed, EPRR was acquired by CREIT, but the project that forms the basis of the complaint was “carved out” prior to CREIT’s purchase of EPRR. CREIT’s Investor Relations department said that EPRR has provided indemnity protection to CREIT as part of the purchase agreement, which the IR team says “effectively relieved us of any monetary consequences or claims”.
      • MB Quick Take: *An indemnity clause is a very important one in the mergers and acquisition space. When you buy physical assets (like a car or a painting), you just get that physical assets, but when you buy a corporation (like when CREIT bought EPRR), you don’t just buy its products and facilities, you buy all of its rights and obligations as a legal entity as well. Doing due diligence on “all the potential legal liability with respect to everything the company did before right now” is (obviously) a rather overwhelming scope for a pre-deal legal review, so indemnification agreements become a more-efficient workaround to solve the problem. *
MB is written and distributed every trading day. The newsletter is 100% free and I never upsell you to some "iNnEr cIrClE" of paid-membership perks. Everyone gets the same! Join the barkada by signing up for the newsletter, or follow me on Twitter. You can also read my daily Morning Halo-halo content on Philstar.com in the Stock Commentary section, and in the Saturday edition of the Daily Manila Shimbun.

Subscribe here

Read today's full newsletter here

submitted by MerkadoBarkada to phinvest [link] [comments]

2023.03.27 01:53 spycrabHamMafia WC as Uranium City

WC as Uranium City submitted by spycrabHamMafia to OldWorldBlues [link] [comments]

2023.03.27 01:41 50wwaayyss [WTS] Demko, Pro-Tech, Spyderco, Kershaw, Civivi, Samsung + MORE!

Few randoms for sale, no trades... I mean, unless you have something really cool haha :-) Please message me if you have any questions. Thanks for looking!
  1. Terra Mundi B'Yond EDC River Wolf John Demko design M390 blade steel Full titanium construction #125 of 200 BNIB $275 PICS
  2. Pro-Tech Mordax 20CV blade steel Button lock Excellent condition, 2nd owner, was told it was carried 1x never cut $245 SPF PICS
  3. Spyderco Paramilitary 2 S110v steel Blurple scales Excellent condition, carried it a time or 2, never cut anything $165 PICS
  4. Kershaw Junkyard Dog 2 1725CB CPM-D2 cutting edge Galyean design Manufactured October 2008 Factory second, no idea why, no apparent issues Knife only, user, see pics Made in USA $145 PICS
  5. Civivi Scintilla Blade HQ exclusive, sold out D2 steel Black G10 scales BNIB sealed $55 PICS
  6. Samsung Galaxy Watch 4 Classic 46mm SM-R895U LTE, wifi, bluetooth Excellent condition, worn twice Has a tiny scratch on the rotating bezel, tried to photograph it, too small Clean IMEI, can be used on any carrier's network, or by bluetooth/wifi $200 PICS
  7. Samsung Galaxy S10e phone SM-G970U Blue color Excellent condition, was my wife's phone for like 3 months Screen protector installed since day 1 Includes the phone and 2 girly cases Clean IMEI, unlocked for any carrier $200 PICS
  8. 50 Ways Pocket Pocket EDC Hank Brand new Microfiber back Sewn in pocket $19 PICS
submitted by 50wwaayyss to EDCexchange [link] [comments]

2023.03.27 01:24 tarvolon Tarvolon Reads a Magazine (or Three): Reviews of Clarkesworld, GigaNotoSaurus, and F&SF (March 2023)

In January, I started a new series, which I hope to continue all year, in which I read and review sci-fi/fantasy magazines. I’ve settled on three publications to follow this year: my two favorites from 2022 and a third that has been a titan of the genre for decades. I’ve yet to have a month without at least three stories I really loved among these three publications, and March was no different. So let’s get to it.


In my eyes, the March issue of Clarkesworld was something close to their median issue: one story I loved, a bunch of stories I liked, and one story that didn’t totally click for me. Issues like this don’t have me struggling to pick a favorite, but with so much to like, they’re exactly the sort of months that keep me a loyal reader. And I suppose it doesn’t hurt that my favorite story in the March issue was my favorite Clarkesworld piece of the first quarter of 2023.
The opening piece, Love in the Season of New Dance by Bo Balder, promised just the kind of alien communication story that I usually love. But while the lead’s cicada-like interlocutor’s obsession with completing the mating ritual before death may be realistic, it wasn’t necessarily the storyline I was hoping for. In fairness, I’m not sure the lead was hoping for it either, and there’s an interesting story to be found in her trying to make the best of an observation much more intimate than expected, but I nevertheless found it difficult to connect.
Angela Liu’s Pinocchio Photography blends the story of a college student trying to choose a career—amidst both social and parental pressures—with an exploration of pictures and memory. They come together in the form of a job photographing reanimated cadavers participating in those family moments they no longer lived to see. The narrative was enough to hold my interest, but it was the fascinating discussion of family and benevolent fictions that really made it worth the read.
Fiona Moore’s The Spoil Heap is an excellent tale of a former coder finding a robot on a trash heap after a technological apocalypse sent society back decades. Extensive flashbacks establish the character’s pre-apocalyptic life, informing both the catastrophe itself and the lead’s response to her present-day find. Probably not a story that will shock veteran readers, but it’s well-executed and a lot of fun.
The issue’s sole novelette, Shari Paul’s Bek, Ascendent, tells the story of a woman who until recently thought herself the lone survivor of her people’s destruction being sent to support their resettlement and finding herself thrown in with old friends and old enemies. It’s a short novelette that neither surprises with the twists nor fully explores all the nuances of the sprawling empire taking charge of smaller people groups recovering from disaster—though it does gesture at more going on under the surface—but it’s a plenty engaging read nonetheless.
Shih-Li Kow’s Failure to Convert takes up the story of clones who must jump through hoop after hoop after hoop to earn the recognition of legal personhood, in a society becoming increasingly prejudiced against them. It’s a well-written and often chilling piece that offers more questions than answers.
My favorite piece in my issue, and my favorite Clarkesworld piece of the year so far, came from an outstanding new writer who appeared thrice on my 2022 favorites list and is already starting on 2023. Isabel J. Kim’s Zeta-Epsilon is a heartfelt piece about a man raised from birth to provide a human bridge to the ineffable mind of a starship. It opens with a puzzle box—why did Zeta commit suicide, and why didn’t Epsilon prevent it?—before diving into Zeta’s childhood, his intense connection with the ship, and the aspects of his life that became untenable. It’s a fascinating story with emotional depth and even a pinch of humor. This one is the whole package.
After the intensity of “Zeta-Epsilon,” the next story continues the AI theme with a little more levity. Louise Hughes’ AI Aboard the Golden Parrot tells of a sentient theme park pirate ship that has taken to the seas collecting robot castaways. There’s still some tension in their mission to rescue imprisoned drones, and the details turn into a bit of a whirlwind at time, but it’s a fun read all told.
The fiction section of the March issue closes with perhaps the most mind-bending piece in the set, Love is a Process of Unbecoming by Jonathan Kincade. The dreamlike narrative is told mostly from the perspective of an organism that has infected and perhaps possessed the human who would ordinarily be considered the main character. It starts as a tense survival story from an upside-down angle, with the perspective then widening to glimpse the family struggles that have led to the story’s main action. Utterly disorienting but skillfully told, I suspect this story would reward a second read.
March’s non-fiction section opens with a dive into the parallels between certain real-world plants and famous fiction landscapes, such as The Book of Koli, Annihilation, and Semiosis. The two interviews are with Nadia Afifi and the people behind Galactic Journey—a fanzine I’d heard of but didn’t know much about. I have to say, I love their concept, and I also love a pair of R.A. Lafferty stories appearing as honorable mentions on their 1960s Best of the Year posts.
Finally, Neil Clarke closes with an editorial on the difficulty of surviving as a genre magazine—particularly in light of Amazon’s decision to end their subscription program—and the decisions Clarkesworld is making in an effort to move forward as sustainably as possible.


GigaNotoSaurus rings in the month with what will be among their shortest pieces of the year, Once Measured, Twice Cut by Anna Martino, in which a half-human pharmacist in post-war England does his best to help both his human and his secret winged patients. It’s easy to read and immensely satisfying for readers who enjoy fantasy healers just trying to help people. Perhaps it all comes together a hair too neatly, but I was still happy to read it.

Fantasy & Science Fiction

After the January/February issue overwhelmed me with extended pieces tied to existing SFF universes, the March/April issue of The Magazine of Fantasy & Science Fiction was a breath of fresh air. The sixteen stories included just two novelettes and no novellas, and while a couple were situated within existing mythologies, this didn’t provide a true barrier to entry.
Let’s start with the pair of excellent novelettes, both of which brought their fair share of whimsy. Peter S. Beagle’s The Weremouse of Millicent Bradley Middle School has something of the flavor of an urban legend becoming all too real, with a witchy and vindictive math teacher terrorizing her students, who must either grovel or find a way to fight back. Sometimes a story just doesn’t have to break new ground to be an utter delight. Also delightful is Eleanor Arnason’s Mr. Catt, telling of the adventures of a six-foot, bipedal feline—in a world mostly consisting of ordinary humans, mind you—who decides on a whim to acquire a dragon. As you might imagine, things don’t go quite as expected, but they go wrong in an extremely entertaining way.
Circling back to the start of the issue, we find two short stories by authors I’d enjoyed immensely in longform and was excited to see here. Lavie Tidhar’s The Station Master offers an entertaining bit of slice-of-life in his Central Station universe, whereas Tade Thompson’s The Sweet in the Empty provides a heart-pounding tale of family and revenge in a fantastical long-ago Arabia. I’m not usually a reader of sword-and-sorcery, but I expect those who are will like this one even more than I did—which was quite a bit!
Continuing the theme of classic fantasy stories well-executed is Jonathan Louis Duckworth’s Spookman, the story of a man who can see spirits of the dead on a mission to find a runaway in a haunted forest. A fun read with an eminently satisfying conclusion.
Switching from fantasy to sci-fi, we find my favorite short story in the issue: M.H. Ayinde’s Piggyback Girl. The Black Mirror-like premise involves an influencer signing a contract allowing her followers to literally see through her eyes. The piece does demand that readers overlook the utter lack of genre-savvy by the lead character, but the sheer intensity of what comes afterward makes up for the credulity in signing the initial contract. The social commentary is on point, and the rising panic as the lead begins to see the walls rising on all sides makes for a truly gripping read.
The latter half of the issue didn’t blow me away quite like the first half, but I still found a lot to enjoy. There were a pair of looping, twisting sci-fi tales that have both reader and characters questioning what’s real. Matthew Lebowitz’s Ouroboros—about a scientist researching technology that can bring dreams to life—was a particular favorite, and Marlon Ortiz’s Mnemonic Longings was a solid tale of a sentient spaceship processing its relationship with the only human to survive a devastating extraterrestrial conflict.
Moonlight, Wing-Wake in Fog by Rick Hollon literalizes the concept of masking one’s true self from the world, telling of a group of shapeshifters trying to keep themselves safe from implacable colonizers bent on their destruction. It’s followed by Nuzo Onoh’s “The Madding,” which should appeal to readers who like some African folklore in their horror.
Amanda Dier’s Escape Velocity is a small-scale, personal tale about the transformations one undergoes while chasing their dreams, and how they live with those transformations. It’s followed by Lauren Bajek’s prose poem Pantoum on a Generation Ship, a flash piece whose title is a wonderfully clever play on the ultimate story. I have seen other reviewers marvel at the story for following the stringent repetition structure of the Pantoum style of poetry, but as someone unfamiliar with the form, the specifics passed me by. Still, a piece well worth reading!
Marie Vibbert’s The Subway Algorithm is Half-Constructed follows grad students talking about AI projects, but it’s much more a story of learning how to interact with other people—a subject in which every major character has their own struggles. K.C. Ahia’s Solar Boy also follows a lead with a fresh STEM degree, this time an engineer scrabbling for one of the scarce jobs working on spaceships. It’s not a hard read, but it’s simultaneously a story of trans belonging, a problem-solving sci-fi, and a story of intrigue and betrayal. For a piece as short as it is, it felt to me a hair overstuffed, and I would’ve appreciated just one or two of those elements having more time to breathe.
The issue closes with a pair of fun mythopoetic stories. Kathleen Jennings’ The Five Lazy Sisters is a fairy tale about…well, five lazy sisters looking for a way out of running a ferry for their whole lives. Then E. Catherine Tobler caps the issue with Remembered Salt, about a bewitched house flying up from its forest home and seeking adventure and pieces of memory. For readers who enjoy short fairy tales without too much emotional intensity, it’s an excellent pair to finish the issue.
As always, F&SF also has poetry and reviews, and this month had a fascinating dive by Arley Song into beloved genre magazines throughout history and what became of them. It’s a good read on its own and dovetails especially nicely with Clarke’s editorial.

March Favorites

This post, along with lots of other discussion of sci-fi and fantasy, both short-form and long, can also be found on my blog, www.tarvolon.com
submitted by tarvolon to Fantasy [link] [comments]

2023.03.27 01:23 50wwaayyss [WTS] Demko, Pro-Tech, Spyderco, Kershaw, Civivi + MORE

Few randoms for sale, no trades... I mean, unless you have something really cool haha :-) Please message me if you have any questions. Thanks for looking!
  1. Terra Mundi B'Yond EDC River Wolf John Demko design M390 blade steel Full titanium construction #125 of 200 BNIB $275 PICS
  2. Pro-Tech Mordax 20CV blade steel Button lock Excellent condition, 2nd owner, was told it was carried 1x never cut $245 SPF PICS
  3. Spyderco Paramilitary 2 S110v steel Blurple scales Excellent condition, carried it a time or 2, never cut anything $165 PICS
  4. Kershaw Junkyard Dog 2 1725CB CPM-D2 cutting edge Galyean design Manufactured October 2008 Factory second, no idea why, no apparent issues Knife only, user, see pics Made in USA $145 PICS
  5. Civivi Scintilla Blade HQ exclusive, sold out D2 steel Black G10 scales BNIB sealed $55 PICS
  6. 50 Ways Pocket Pocket EDC Hank Brand new Microfiber back Sewn in pocket $19 PICS
submitted by 50wwaayyss to Knife_Swap [link] [comments]

2023.03.27 01:11 MeLoNarXo What the actual hell is happening there?

What the actual hell is happening there? submitted by MeLoNarXo to hoi4 [link] [comments]

2023.03.27 01:08 Icanintosphess Just because I am a fanatical purifier it doesn't mean I can't be cultured!

Just because I am a fanatical purifier it doesn't mean I can't be cultured! submitted by Icanintosphess to Stellaris [link] [comments]

2023.03.27 01:03 PretendSupermarket68 Just opened my first box

Just opened my first box submitted by PretendSupermarket68 to basketballcards [link] [comments]

2023.03.27 00:52 Kratangg 「Dear Fellow Traveller」

Namesake: Dear Fellow Traveller by Sea Wolf
User: Kinsly
Tool/Phenomenon stand
Power: B (Currents can flow at a maximum of B power, but less intense forces are possible when setting up)
Speed: B (Currents flow at a maximum of B speed)
Range: C (Handprints can be applied by touch or anywhere within 15 meters. Once placed, handprints will remain fully functional until destroyed or deactivated, no matter how far away the user may be. Handprints can be dispelled at any distance)
Durability: E (Handprints can be easily dusted away as if they were chalk, but are unaffected by the stand’s own currents)
Precision: C (The user has rough control over the speed and power of the flows they summon, and can fairly precisely apply handprints where they desire)
Potential: C
Glowing white/tealish handprints, similar in hue to a bio-bay. The user’s silhouette can be seen within the fingerprints, and change to match their appearance in real time.
The user himself is exactly 5’0, with a large, wide brimmed hat that brings him to 5’5. Entire body above shin-high hiking boots is obscured by a 360 degree cloak seemingly made of stitching and zippers. He can extend a hand out of any opening with ease, un and rezipping as he goes in a single fluid motion. Dark blonde hair with braids framing his face, and an even stubble. Long hooked nose and a smile that literally shines, with almost every other tooth silvered.
「Dear Fellow Traveler」 places glowing handprints on or inside of objects and surfaces, which create forceful, self contained currents that pull anything coming into contact with them ‘downstream’ at a maximum of B power between the handprints. The currents sharply begin at the first handprint, but naturally disperse at the endpoint, spreading like an estuary of wind. Prints can be applied midair or om fluids, but degrade quickly and require a consistent flow of new connections to retain the flow.
Currents function with a cylindrical laminar flow, forming a cylinder of force vectors inherent to the path that primarily intakes air at the origin point, but does not significantly interfere with the atmosphere along its length; air particles in direct contact with the current are taken in equally along the total surface area, minimizing the impact. A solid touching or intersecting with the pathing at any point will bypass this, immediately dragging it downstream.
A new handprint can be placed and ‘linked’ to an existing current, extending its path while removing the print that previously served as its endpoint. The user can determine the arc the current takes between these points, or designate it as a straight line. Thirty handprints can exist at a time, with the oldest instances vanishing as new ones are created beyond the cap.
Perhaps because of this ability, the user is acutely aware of the motion within fluids around him, and more than capable of utilizing their currents. Whether it be constructing a hobble to split the wind of a storm, properly setting a fire safe from premature extinguishing, sailing or swimming in such a way to take advantage of the winds and currents, or even following airflow to the opening of a cave.
This applies to his own stand as well, allowing him to efficiently ride his own waves, and he’s one hell of a kite-flier.
User information/backstory:
Everywhere, there are wanderers. Transient folk who move from place to place, never lingering for long. One such soul is the fleeting presence of Kinsly. Nothing brings Kinsly greater joy than the aforementioned journey, migrating from place to place to soak in the beauty of his current resting spot. Experiencing new locations and meeting new people, even for a short time, is wonderful, and Kinsly will experience such elation many times, without being tethered to any one area.
Nimble with the endurance for long days of travel, Kinsly understands there are many precautions that must be taken with such a lifestyle. Of course, one necessarily develops a set of crucial set of skills after having spent decades of exploration drifting about every environment imaginable. He has weathered woodlands, plains, desert, cityscapes, lakes, oceans, tundras, what have you, and is more than capable of self-sustainably providing for himself in all situations. Makeshift lodging, procuring and preparing food from what can be gathered, traversing all manner of terrain, avoiding credible threats, doing so without a trace of his presence once gone are all within his arsenal. This includes hunting, rudimentary trapmaking, and basic self defense and medical care.
Throughout it all Kinsly is a friendly, calming presence, speaking in a tone as swift and soft as the wind. He travels the world, enjoying the presence of others and solitude alike (Even if the latter is preferable), while contemplating what at least he considers to be wisdom and interesting concepts. He is more than willing to listen to others’ interests, opinions, or problems, while providing his own insights. While generally open, Kinsly never speaks of his early years, and will simply decline to answer or guide the conversation elsewhere. Whether he remembers his childhood at all is unclear, and he acts as if he has had his stand since the dawn of time: simply an extension of his own being, his own hands. He admires his surroundings, whether they be nature, architecture, or simply a culture he is yet to experience.
Kinsly rarely stays in one place for very long, lest he or it overstay their welcome to each other. He wanders alone, with attachment or long term connection to anything not necessary in his mind. This extends to extensive material possessions that would prevent traveling light, as well. He does not desire nor relate to gathering objects, and would prefer to remember something rather than own it. This especially applies to himself. Do not miss Kinsly; remember him.
Despite his demeanor, he can swiftly become anxious or insecure, and spends much time contemplating these issues (His anxiety, as well as the bitterness towards it he is self-aware of) he sees within himself. Whether the motivation for it, or a byproduct, this trait is deeply intertwined with his lifestyle. Extended contact with others, or large enough groups make him uncomfortable. Thus, Kinsly spends much of his time alone-once again, whether this has induced anxiety around others or is the natural consequence of introversion is irrelevant. Kinsly resents this anxiety itself, seeing it as an insecure weakness despite his preference to be alone. This anxiety additionally seems to reveal itself in other cases of personal failure. Lots of time alone gives ample time to think, which consequently leads back to such thoughts in an unfortunate cycle.
No matter, there is plenty of time to contemplate and many new experiences to be had in the meantime. To that end, remember-a series of odd chalkprints or aberrant winds may be the sign of a possible new friend in town, however fleeting the encounter may be.
submitted by Kratangg to fanStands [link] [comments]

2023.03.27 00:50 upinflames_ How would you solve this problem?

submitted by upinflames_ to ACT [link] [comments]

2023.03.27 00:50 jskid713 Why I left the Mormon church

There are a lot of these stories in this subreddit, but I thought I'd go ahead and submit mine in case it resonates with someone.
First of all, let me say that I have many friends and family who are still members of the Church of Jesus Christ of Latter-Day Saints. Many of them are smarter, more knowledgeable, and more Christlike than I’ll ever be. I deeply respect these people. The fact that I’ve made a different choice than them means that I disagree with them on some fundamental issues. Any explanation as to why I made my choice may come across as antagonistic to those who have made a different choice. But I continue to love and respect my friends and family who remain faithful to Mormon doctrine.
I believe there are two main hooks that can keep someone in a church. First, you believe its truth claims, and second, you believe in the effect it has on your life and the world (e.g. it is an expression of your values, you believe it is a force for good, it makes you a better person, it gives you peace, etc.) One of these hooks had to hold for me to remain in the Mormon church. If I believed in the truth claims, that the Mormon church is truly God’s kingdom on earth, that it is the only “true” church, that it is the only path to happiness and eternal salvation, then it wouldn’t matter if it conflicted with my personal values or opinions — I would need to get with the program. And if I loved the effect it had on my life and my family, and I believed the Mormon church was the best place for us to develop into healthy, happy people while also doing good in the world, then I could live with having a shared mythology about Joseph Smith and the Book of Mormon that wasn’t based in fact. When neither of those hooks held, the only rational choice was for me to leave the church.
I won’t go into detail about the problems with specific Mormon truth claims as there are many other sites that do that thoroughly (see LDS Discussions and Letter for My Wife). When I was a member, I didn’t allow myself to read sites like those. I was taught to only study “faith-promoting” sources (i.e. church propaganda). I was taught that if I read “anti-Mormon” literature, I would lose “the Spirit” and be subject to the influence of Satan, which would lead me away from the church. And I liked my life in the church. I didn’t really know what life outside the church was like, but I loved what I had and didn’t want to mess with it. So I would brush aside all sorts of highly problematic issues (e.g. anachronisms in the Book of Mormon, the weirdness of the temple endowment and garments, Joseph Smith coercing teenagers and married women to have sex with him, etc.)
But as more of these issues started to bubble up, and as I started to notice areas where the church’s teachings didn’t align with my values (as discussed below), I reached a point where I decided that I was not going to be afraid of information. I would seek out facts and truth without a predetermined conclusion in mind. That meant that if I learned the church was wrong, I would leave. That’s important, because I don’t believe anyone can look at the church objectively unless they’re willing to walk away from it.
I read the CES letter. I read rebuttals of the CES letter. I listened to Mormon Stories. I read church apologetics on FAIR Latter-Day Saints. I read history books like Early Mormonism and the Magic World View and The Mormon Hierarchy: Origins of Power. I read Under the Banner of Heaven. Once I allowed myself to objectively weigh the evidence (including my own experiences), the pieces fit into place very quickly. All my questions, concerns and uncertainties about the church became astonishingly clear. There were mountains of evidence that the Mormon church’s truth claims were simply not true. I could no longer say that I believed “the church is true” and consider myself an honest person.
But again, I could stay in the church as a non-believer (or at least as someone that didn’t believe everything literally) if the church was consistent with my values. In many ways it is, but there are some important ways in which it is not. Below are some things I value, some things I don’t care about, and some things I want for my daughters. This is not an exhaustive list, but I’m hoping it’s enough to illustrate why I can no longer remain in the church.
Some things I value:
Some things I don’t care about:
Some things I want for my 4 daughters:
Some say that none of this matters if you have a witness of the church’s divinity through the Holy Ghost. Essentially that means that personal feelings should trump all arguments, evidence, rational thought and moral principles. But spiritual feelings aren’t unique to the Mormon church. Nor are Mormons the only ones who tell members to ignore all outside information and instead ask God for direction — that is a common tactic used by high demand religions. I believe it is important to seek out a connection with God, and I believe that God has guided me at pivotal moments in my life, but I don’t believe that spiritual feelings are the ultimate source of truth. And I hope this isn’t insulting, but I don’t personally know anyone who believes in the Mormon church after fully investigating it without a predetermined conclusion in mind. People in the church want to believe in it, so they find a way to get there even in the face of contradictory evidence. In my experience, people who join the church as converts don’t know the whole story. If there is anyone out there that knew about the complete history of the Mormon church, warts and all, and then decided to be baptized, I would love to talk to that person.
I’m grateful for some of the positive things the church has instilled in me. I believe it has made me a hopeful person that wants to do the right thing. But I’ve reached a point in my life where the church is no longer right for me. If any of my friends or family reading this wants to discuss issues with the church, challenge me, or give me arguments or evidence I might have missed, I’d be more than happy to have an open two-way discussion. Until then, know that I love you and hope that our mutual respect for each other can continue.
submitted by jskid713 to exmormon [link] [comments]

2023.03.27 00:01 AgentOooOoo9 Funko collection update

Funko collection update submitted by AgentOooOoo9 to TopPops [link] [comments]