Available via RSS, Apple Podcasts, Overcast, Castro, and Pocket Casts
English · Non-explicit · Rated 5.00 stars · Hosted on anchor.fm · Average episode length 55:31

The Paleo Cyborg Podcast

by Steve

AI

Copyright: Steve

Episodes

Part IV of Craig Weinberg’s epic podcast

44m · Published 27 Jan 18:50
Wherein free will is explained, along with how it's lensing all the way down. Topics:
- The interesting part of free will is the Will
- "Reason is slave to the passions"
- Stoplight metaphor
- A different timescale
- https://multisenserealism.com/the-competition/departing-from-the-consensus/determinism-tricks-of-the-trade/
- Holos is present in every fragment
- Versing the unity
- Nothing is ever repeated
- A triangle is eternal: the third side makes itself for free if you have two sides
- Spirals instead of looping
- The difference between a labyrinth and a maze: the maze is finite and temporal; the labyrinth is infinite possibility

Episode 16 part 3 with Craig Weinberg

32m · Published 26 Jan 03:32
The foundation is laid for explaining free will and meta-representational consciousness without resorting to concepts such as emergence. Topics:
- Map and territory
- Chris Langan
- Free will is creating space and time via manifolds and fractal recursion
- You are the universe interacting with itself
- Flatland
- Chris Fields
- Infinite regression recursively stabilizes
- Bruce Lipton: consciousness via external inputs
- Nikola Tesla: energy, frequency, and vibration; his brain is but a transmitter and a receiver
- "Mind-expanding" drugs suppress the ego and survival mechanisms

Free Will explained in five minutes

11m · Published 26 Jan 03:12
With a five-minute preamble introducing and explaining the map and the territory, as well as simulations and representations.

Map vs Territory Dawn of Justice

2m · Published 25 Jan 01:59
A brief reframe of “the map is not the territory”

Episode 16 part 2 with Craig Weinberg 🧙‍♂️

27m · Published 25 Jan 01:22
The map and the territory enter the chat. Topics:
- Amplituhedrons simplify the equations instead of complexifying the math
- Emergence
- Map and territory
- Chris Langan
- Free will is creating space and time via manifolds and fractal recursion
- You are the universe interacting with itself
- Flatland
- Chris Fields
- Infinite regression recursively stabilizes
- Bruce Lipton: consciousness via external inputs
- Nikola Tesla: energy, frequency, and vibration; his brain is but a transmitter and a receiver
- "Mind-expanding" drugs suppress the ego and survival mechanisms
- "You can't do impossible things"

Craig Weinberg Returns episode 16 part 1

28m · Published 24 Jan 04:50
Wizard of consciousness Craig Weinberg returns once again to explain multisenserealism.com and more. Show notes:
- David Chalmers' Hard Problem
- Latest blog post: https://multisenserealism.com/2023/06/20/youtube-is-there-really-a-hard-problem-of-consciousness/
- Aesthetic participatory phenomenon
- Descartes: "I think, therefore I am"
- Wavelengths and frequencies of light and the colour red
- You should measure photons and matter against the ruler of redness and blueness; compare and contrast (optical illusions, for example)
- Dream matters
- "The brain filters reality"
- Simulation theory is a metaphor
- Map vs territory
- Physics, evolution, and natural selection
- Donald Hoffman
- Neo-Platonism
- Newton, astrology, and alchemy
- Epicycles added to science to explain errors and correct them; the epitome of this is string theory

Caio Temer Episode 13 part 2

1h 13m · Published 17 Jan 15:26
Caio Temer speaks on LLMs, AI, and AGI, and the possibility of them becoming conscious. Collaborative efforts involve creating language models using Facebook's FastText, demonstrated in Colab notebooks and a GitHub repository (see the sketch after these links):
- [Colab 1](https://colab.research.google.com/drive/1wVQrpzyY-SkCZTRZCcP6xJGRdg1ik0jR#scrollTo=pIW6-VwMuRlz)
- [Colab 2](https://colab.research.google.com/drive/1oABIZr1xiIu7DKc7AbbZfeBJFLkZW6Ep#scrollTo=XfR-3PpPYoFU)
- [GitHub Repository](https://github.com/rtkclouds/fast-js-language-model)
Additionally, you can find more details in the provided [GitHub gists](https://gist.github.com/rtkclouds/50b81d10736793f07cdca354516e8757) and [here](https://gist.github.com/rtkclouds/a6ee9afd96461ca94b3e9c22f78bda3a). For further insights, you can explore the [X.com link](https://x.com/canalCCore2/status/1735044356535472278?s=20).
Part 2 links:
- https://canalccore.medium.com/innovating-beyond-openais-q-a-novel-reinforcement-learning-approach-to-nlp-09d4b58da1ea
- https://ai.plainenglish.io/leveraging-local-indexes-in-language-models-to-enhance-contextual-learning-3d2a8f98fde5
- https://canalccore.medium.com/language-models-still-have-basic-structural-problems-9f88b04576c
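The actual language-model work lives in the Colab notebooks and repository linked above; as a rough orientation only (not the notebooks' code), a minimal sketch of training unsupervised FastText word vectors with the official Python bindings might look like the following, where the corpus path and hyperparameters are placeholder assumptions:

```python
# Minimal sketch: unsupervised FastText embeddings via the official Python bindings.
# Assumes `pip install fasttext` and a plain-text corpus at corpus.txt (placeholder path).
import fasttext

# Train a skip-gram model; dim, epoch, and minCount are illustrative values,
# not the settings used in the linked notebooks.
model = fasttext.train_unsupervised(
    "corpus.txt",      # one sentence per line, whitespace-tokenized
    model="skipgram",  # or "cbow"
    dim=100,           # embedding dimensionality
    epoch=5,
    minCount=2,        # ignore very rare tokens
)

# Subword n-grams let FastText produce vectors even for unseen words.
vec = model.get_word_vector("consciousness")
print(vec.shape)                                # (100,)
print(model.get_nearest_neighbors("mind", k=5))

model.save_model("fasttext_sketch.bin")
```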

Caio Temer Episode 13 part 1

1h 21m · Published 17 Jan 14:26
Caio Temer explains LLMs and machine learning.

A question posed to ChatGPT: generalized learning was found to happen after 10k epochs with a considerable amount of data per epoch; are there studies in this regard? ChatGPT: Yes, there are studies related to generalized learning after many epochs with a large amount of data per epoch. A notable example is the "Deep Bootstrap Framework", presented at ICLR 2021. This framework makes a connection between generalization and online optimization. It compares a "real world", where a model is trained on a finite number of samples over multiple passes, with an "ideal world", where each training step uses fresh samples from a virtually infinite data set. The research found that models that train quickly in the infinite-data setting are the same ones that generalize well on finite data. This study offers a new perspective on practical design choices and provides a theoretical path to understanding generalization.

Caio Temer show notes, part one summary. Interesting topics for trading:
- How I created the first bots
- How models combine language models with autonomous robots
- Why conventional methods don't work

About language models:
- Language models are simpler to obtain with other structures
- The problem in the current structure and how I solved it
- Development of intuitive neural networks using hash abstraction

Caio developed a way for neural networks, through cryptography techniques that use sums, to learn to abstract and understand deeply and quickly. He created a library for a tiny language model that learns to predict all tokens at absurd speed by using distance-vector embeddings instead of tokens.

Transformers: https://arxiv.org/abs/2205.13504

Practical tips:
- If the network is exploding, there's something wrong with a component, or the update rate is wrong. If the rate is right, the linearities are wrong: exchange a linear or nearly linear connection like ReLU for tanh, or add a tanh.
- Softmax in deep layers as a projector reduces network convergence time by forcing layer alignment.
- Training the transformer one layer at a time, adding a copy of the first layer and freezing the first, accelerates training and improves retention (see the sketch after these notes).
- For sequence-to-sequence, a permuted linear dense layer solves 99% of problems in less time and better than the transformer.
- Giant embeddings work like entire networks, only needing one activation at the end after reshaping.
- "1 tanh layer of 1 million units solves everything better than any other network with the same number of parameters, at least in the tasks I use; I always start with it, then see how to reduce."

Colabs: making a language model using Facebook's FastText
https://colab.research.google.com/drive/1wVQrpzyY-SkCZTRZCcP6xJGRdg1ik0jR#scrollTo=pIW6-VwMuRlz
https://colab.research.google.com/drive/1oABIZr1xiIu7DKc7AbbZfeBJFLkZW6Ep#scrollTo=XfR-3PpPYoFU
https://github.com/rtkclouds/fast-js-language-model
https://x.com/canalCCore2/status/1735044356535472278?s=20
GitHub gists:
https://gist.github.com/rtkclouds/50b81d10736793f07cdca354516e8757
https://gist.github.com/rtkclouds/a6ee9afd96461ca94b3e9c22f78bda3a
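The layer-at-a-time recipe above (grow the transformer by copying the already-trained first layer, freeze what came before, and keep training) is only described verbally in the notes. The PyTorch sketch below is one possible reading of it; the model size, toy random data, and training loop are illustrative assumptions, not details from the episode:

```python
# Hypothetical sketch of progressive layer stacking: train a one-layer
# transformer encoder, copy the trained layer to initialise a second layer,
# freeze the first, and continue training. Sizes and data are placeholders.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

d_model, n_heads, vocab, seq_len = 128, 4, 1000, 32

embed = nn.Embedding(vocab, d_model)
head = nn.Linear(d_model, vocab)
layers = nn.ModuleList([nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)])

def forward(tokens):
    # Causal mask so each position only attends to earlier positions.
    mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
    x = embed(tokens)
    for layer in layers:
        x = layer(x, src_mask=mask)
    return head(x)

def train_stage(steps=200):
    # Only parameters that are not frozen get optimised.
    trainable = [p for m in (embed, head, layers) for p in m.parameters() if p.requires_grad]
    opt = torch.optim.Adam(trainable, lr=3e-4)
    for _ in range(steps):
        tokens = torch.randint(0, vocab, (8, seq_len))   # toy random batch
        logits = forward(tokens[:, :-1])                 # predict the next token
        loss = F.cross_entropy(logits.reshape(-1, vocab), tokens[:, 1:].reshape(-1))
        opt.zero_grad()
        loss.backward()
        opt.step()

# Stage 1: train the single-layer model.
train_stage()

# Stage 2: copy the trained layer, freeze the original, and train the new copy.
layers.append(copy.deepcopy(layers[0]))
for p in layers[0].parameters():
    p.requires_grad = False
train_stage()
```

Further stages would repeat the copy-freeze-train step for each additional layer; whether that matches the exact recipe used in Caio's experiments is not specified in the notes.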

Forrest & Ai part 2: Wittgenstein Re-rises

51m · Published 17 Jan 04:16
Episode 12, Part 2
Twitter / X account: https://x.com/fr_and_ai?s=21
YouTube channel: https://youtube.com/@FRandAI?si=iHpZtMYXkS0LtLP-
Patreon: https://www.patreon.com/frandai?utm_campaign=creatorshare_fan
Aze Alter account: https://x.com/azealter?s=21
Topics:
- Incomprehensible weirdness
- Backwards causation
- Ruliad: the entangled limit of all possible computations
- Branchial space
- Shoggoth-with-smiley-face meme: https://knowyourmeme.com/memes/shoggoth-with-smiley-face-artificial-intelligence
- Stephen LaBerge, lucid dreaming
- Annihilation notes meme:
  https://slate.com/culture/2018/02/annihilation-co-composer-ben-salisbury-explains-the-musical-cue-from-the-alien.html
  https://slate.com/culture/2018/02/when-does-the-weird-musical-noise-finally-happen-in-annihilation.html
  https://slate.com/culture/2018/02/im-obsessed-with-annihilations-five-note-melody-sound-effect.html
  https://x.com/gracespelman/status/1082862350846312448?s=46
- Binaural Gateway
- Optimizing for attention, not for attention span
- Rewatchability
- Engagement
- Midjourney 6
- Very unpleasant dreams
- Rabbit character
- ChatGPT
- The Judgement, The Bureau, The Penal Colony, The Metamorphosis (the VR experience)
- Goat Simulator
- The Hunger Artist
- Unlimited Dream Company
- Satisficing
- Christopher Nolan
- Focused
- Alex Garland
- Kenneth Stanley: serendipity and novelty
- Peter Birkhäuser, Jungian
- Tesla channeling
- Iain McGilchrist's left-brain/right-brain split: https://youtu.be/lgnMyF-o0sQ?si=raUB-6NKEuJwDjUQ
- Duplass brothers
- Pascal's Wager
- Ursula Le Guin

Forrest & Ai episode 12 part 1

1h 7m · Published 16 Jan 03:38
Director and artist Forrest Rice joins the show to talk about the future of art, AI, philosophy, Wittgenstein, the Ruliad, H.P. Lovecraft, Franz Kafka, and much more. Show notes:
- Twitter account: https://x.com/fr_and_ai?s=21
- YouTube channel: https://youtube.com/@FRandAI?si=iHpZtMYXkS0LtLP-
- Patreon: https://www.patreon.com/frandai?utm_campaign=creatorshare_fan
- Brief intro
- Kafka's short stories, The Colony
- Indie filmmaking
- Wittgenstein
- Loss function
- Joscha Bach
- Alan Turing, Turing machines
- Halting problem
- Logical positivism
- Functionalism
- Wittgenstein: The Duty of Genius by Ray Monk
- "I play the orchestra" —Steve Jobs
- Leon Vitali, Filmworker
- Peacock Kubrick documentary: https://m.imdb.com/title/tt6851066/
- Bo Burnham, INSIDE, Netflix
- Aze Alter, Capital of Conformity
- Loss function returns
- Indrid Cold, Derenberger, The Mothman Prophecies
- Roko's Basilisk
- Gaia Christmas gnome elf
- CIA Gateway
- Incomprehensible weirdness
- Backwards causation
- Ruliad: the entangled limit of all possible computations
- Branchial space
- Shoggoth-with-smiley-face meme: https://knowyourmeme.com/memes/shoggoth-with-smiley-face-artificial-intelligence
- Stephen LaBerge, lucid dreaming
- Annihilation notes meme
- Binaural Gateway
- Optimizing for attention, not for attention span
- Rewatchability
- Engagement
- Midjourney 6
- Very unpleasant dreams
- Rabbit character
- ChatGPT

The Paleo Cyborg Podcast has 18 episodes in total, all non-explicit. Total playtime is 16:39:22. The language of the podcast is English. The podcast was added on December 24th, 2023 and last updated on May 17th, 2024 at 10:41. It might contain more episodes than the ones shown here.
