Discover and read the best of Twitter Threads about #Inference

Most recent (4)

6 Sources of Knowledge According to Ancient Indian Philosophies

-

A Brief Thread on Pramana-s (Sources or Evidence of Knowledge)

(1/8)
Pratyakṣa (Perception):

This is the most direct form of knowledge, obtained through the five senses (sight, hearing, taste, smell, touch) or the mind. It is considered the most immediate and reliable source of knowledge. #Pratyaksha #Perception

(2/8)
Anumāna (Inference):

Inference is the process of deriving conclusions based on perceived evidence or premises.

It's a critical tool for developing knowledge when direct perception isn't possible and is widely used in philosophical debates. #Anumana #Inference

(3/8)
🚀 We are live! Here is the correct streaming link for all of today's discussions and performances, starting with a panel on complex time with David Krakauer, James Gleick, Ted Chiang, and David Wolpert in a few moments (measured linearly...):

#IPFest
"One of the ideas we had with #InterPlanetary was, 'What would it take to make science hedonistic? And instead of telling people to do it, you'd have to tell people to STOP doing it?'"

- SFI President David Krakauer sets the tone for this weekend's celebrations
#IPFest
David Krakauer: "Do you have a favorite model or metaphor for #time?"

@JamesGleick: "You've already mentioned a river; that's everybody's favorite. Borges said time is a tiger. People talk about it as a thread. We ONLY talk about time in metaphors."
"#Imitation vs #Innovation: Large #Language and Image Models as Cultural #Technologies"

Today's Seminar by SFI External Prof @AlisonGopnik (@UCBerkeley)

Streaming now:


Follow our 🧵 for live coverage.
"Today you hear people talking about 'AN #AI' or 'THE AI.' Even 15 years ago we would not have heard this; we just heard 'AI.'"
@AlisonGopnik on the history of thought on the #intelligence (or lack thereof) of #simulacra, linked to the convincing foolery of "double-talk artists":
"We should think about these large #AI models as cultural technologies: tools that allow one generation of humans to learn from another & do this repeatedly over a long period of time. What are some examples?"

@AlisonGopnik suggests a continuity between #GPT3 & language itself:
#Highlights2021 for me: our #survey on efficient processing of #sparse and compressed tensors of #ML/#DNN models on #hardware accelerators published in @ProceedingsIEEE.
Paper: dx.doi.org/10.1109/JPROC.…
arXiv: arxiv.org/abs/2007.00864
RT/sharing appreciated. 🧵
Context: Tensors of ML/DNN models are compressed by leveraging #sparsity, #quantization, and shape reduction. We summarize several such sources of sparsity & compression (§3). Pruning induces structured sparsity, while sparsity arising from various applications or sources is inherently unstructured. [Images: sources of sparsity; common structures of sparsity]
Likewise, leveraging value similarity or approximate operations can introduce irregularity in processing, and size-reduction techniques can make tensors asymmetrically shaped. Hence, special mechanisms may be required for efficient processing of sparse and irregular computations.
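To make the compression idea concrete, here is a minimal sketch (not from the paper) of the classic compressed sparse row (CSR) format: after pruning zeroes out most weights, only the nonzero values plus their coordinates are stored, and computation touches only those nonzeros. The matrix `W` and helper names below are illustrative, not the survey's notation.

```python
def to_csr(dense):
    """Convert a dense 2-D list into CSR (values, column indices, row pointers).

    Storing only nonzeros is the basic idea behind sparse-tensor
    compression; structured pruning would instead zero out whole
    blocks or rows, making the index metadata more regular.
    """
    values, col_idx, row_ptr = [], [], [0]
    for row in dense:
        for j, v in enumerate(row):
            if v != 0:
                values.append(v)
                col_idx.append(j)
        row_ptr.append(len(values))  # running count of nonzeros per row
    return values, col_idx, row_ptr


def csr_matvec(values, col_idx, row_ptr, x):
    """Sparse matrix-vector product: only nonzero entries are visited."""
    y = []
    for r in range(len(row_ptr) - 1):
        acc = 0
        for k in range(row_ptr[r], row_ptr[r + 1]):
            acc += values[k] * x[col_idx[k]]
        y.append(acc)
    return y


# A hypothetical 3x4 weight matrix after unstructured magnitude pruning
# (75% of entries are zero).
W = [[0, 2, 0, 0],
     [0, 0, 0, 5],
     [1, 0, 0, 0]]
vals, cols, ptrs = to_csr(W)
print(vals, cols, ptrs)                             # [2, 5, 1] [1, 3, 0] [0, 1, 2, 3]
print(csr_matvec(vals, cols, ptrs, [1, 1, 1, 1]))   # [2, 5, 1]
```

The irregularity the thread mentions is visible here: the nonzeros per row vary (the `row_ptr` gaps are uneven), so hardware accelerators need special indexing and load-balancing mechanisms that dense pipelines do not.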
