
Geopolítica e Política

Lusa - Lusística - Mundial

Talmudism

16.06.23 | De Situ Orbis

A full set of the Babylonian Talmud

 

What is the Difference Between Torah and Talmud?


By waklvyct at Jewish Police Association on September 23, 2013

The Torah refers to the Five Books of Moses (also known as the Pentateuch), which for Jews is the core of the Jewish faith and the source of the main laws and ethics. The Talmud is a record of the rabbinic debates in the 2nd-5th centuries on the teachings of the Torah, both trying to understand how they apply and seeking answers for the situations the rabbis themselves were encountering. An example of the former is "Thou shalt not kill" – but how does this apply to suicide, self-defence, or time of war? An example of the latter is: what does Jewish teaching say about going to a bath-house in which there is a Roman statue (does it count as an idol and mean Jews cannot go there?). The Torah and Talmud together form the background of Judaism today, with the Biblical commands often being re-interpreted in the light of the Talmudic debates (e.g. 'an eye for an eye' does not mean literal physical retribution but financial compensation for the pain, public embarrassment, time off work, medical fees, etc.).

 

American Pravda: Oddities of the Jewish Religion 

The Surprising Elements of Talmudic Judaism


Ron Unz • The Unz Review • July 16, 2018 • 7,800 Words

Israel Shahak and the Middle East

About a decade ago, I happened to be talking with an eminent academic scholar who had become known for his sharp criticism of Israeli policies in the Middle East and America’s strong support for them. I mentioned that I myself had come to very similar conclusions some time before, and he asked when that had happened. I told him it had been in 1982, and I think he found my answer quite surprising. I got the sense that date was decades earlier than would have been given by almost anyone else he knew.

Sometimes it is quite difficult to pinpoint when one’s world view on a contentious topic undergoes sharp transformation, but at other times it is quite easy. My own perceptions of the Middle East conflict drastically shifted during Fall 1982, and they have subsequently changed only to a far smaller extent. As some might remember, that period marked the first Israeli invasion of Lebanon, and culminated in the notorious Sabra-Shatila Massacre during which hundreds or even thousands of Palestinians were slaughtered in their refugee camps. But although those events were certainly major factors in my ideological realignment, the crucial trigger was actually a certain letter to the editor published around that same time.

A few years earlier, I had discovered The London Economist, as it was then called, and it had quickly become my favorite publication, which I religiously devoured cover-to-cover every week. And as I read the various articles about the Middle East conflict in that publication, or others such as the New York Times, the journalists occasionally included quotes from some particularly fanatic and irrational Israeli Communist named Israel Shahak, whose views seemed totally at odds with those of everyone else, and who was consequently treated as a fringe figure. Opinions that seem totally divorced from reality tend to stick in one’s mind, and it took only one or two appearances from that apparently die-hard and delusional Stalinist for me to guess that he would always take an entirely contrary position on every given issue.

In 1982 Israel Defense Minister Ariel Sharon launched his massive invasion of Lebanon using the pretext of the wounding of an Israeli diplomat in Europe at the hands of a Palestinian attacker, and the extreme nature of his action was widely condemned in the media outlets I read at the time. His motive was obviously to root out the PLO's political and military infrastructure, which had taken hold in many of Lebanon's large Palestinian refugee camps. But back in those days invasions of Middle Eastern countries on dubious pretexts were much less common than they have subsequently become, after our recent American wars killed or displaced so many millions, and most observers were horrified by the utterly disproportionate nature of his attack and the severe destruction he was inflicting upon Israel's neighbor, which he seemed eager to reduce to puppet status. From what I recall, he made several entirely false assurances to top Reagan officials about his invasion plans, such that they afterward called him the worst sort of liar, and he ended up besieging the Lebanese capital of Beirut even though he had originally promised to limit his assault to a mere border incursion.

The Israeli siege of the PLO-controlled areas of Beirut lasted some time, and negotiations eventually resulted in the departure of the Palestinian fighters to some other Arab country. Shortly afterward, the Israelis declared that they were moving into West Beirut in order to better assure the safety of the Palestinian women and children left behind and protect them from any retribution at the hands of their Christian Falangist enemies. And around that same time, I noticed a long letter in The Economist by Shahak which seemed to me the final proof of his insanity. He claimed that it was obvious that Sharon had marched to Beirut with the intent of organizing a massacre of the Palestinians, and that this would shortly take place. When the slaughter indeed occurred not long afterward, apparently with heavy Israeli involvement and complicity, I concluded that if a crazy Communist fanatic like Shahak had been right, while apparently every mainstream journalist had been so completely wrong, my understanding of the world and the Middle East required total recalibration. Or at least that’s how I’ve always remembered those events from a distance of over thirty-five years.

During the years that followed, I still periodically saw Shahak's statements quoted in my mainstream publications, which sometimes suggested that he was a Communist and sometimes not. Naturally enough, his ideological extremism made him a prominent opponent of the 1993 Oslo Peace Agreement between Israel and the occupied Palestinians, which was otherwise supported by every sensible person, though since Oslo ended up being entirely a failure, I couldn't hold it too strongly against him. I stopped paying much attention to foreign policy issues during the 1990s, but I still read my New York Times every morning and would occasionally see his quotes, inevitably contrarian and irredentist.

Then the 9/11 attacks returned foreign policy and the Middle East to the absolute center of our national agenda, and I eventually read somewhere or other that Shahak had died at age 68 only a few months earlier, though I hadn't noticed any obituary. Over the years, I'd seen some vague mention that during the previous decade he'd published a couple of stridently anti-Jewish and anti-Zionist books, just as might be expected from a hard-line Communist fanatic, and during the early 2000s I started seeing more and more references to these works, ironically coming from fringe sources of the anti-Semitic Far Right, thereby once again proving that extremists flock together. Finally, about a decade ago, my curiosity got the better of me and, clicking a few buttons on Amazon.com, I ordered copies of his books, all of which were quite short.

To continue reading, see the table of contents below.

 

Readers' Comments here.

END

 

The Artificial Intelligence Hoax

03.06.23 | Duarte Pacheco Pereira

AI chip

New on MoA: ‘Artificial Intelligence’ Is (Mostly) Glorified Pattern Recognition

 

‘Artificial Intelligence’ Is

(Mostly) Glorified Pattern Recognition

 

This somewhat funny narrative about an 'Artificial Intelligence' simulation by the U.S. Air Force appeared yesterday and was widely picked up by various mainstream media (SEAD = Suppression of Enemy Air Defenses, SAM = Surface to Air Missile):

However, perhaps one of the most fascinating presentations came from Col Tucker ‘Cinco’ Hamilton, the Chief of AI Test and Operations, USAF, who provided an insight into the benefits and hazards in more autonomous weapon systems.
… 
He notes that one simulated test saw an AI-enabled drone tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human. However, having been ‘reinforced’ in training that destruction of the SAM was the preferred option, the AI then decided that ‘no-go’ decisions from the human were interfering with its higher mission – killing SAMs – and then attacked the operator in the simulation. Said Hamilton: “We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

He went on: “We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”


In the early 1990s I worked at a university, first to write a Ph.D. in economics and management and then as an associate lecturer for IT and programming. A large part of the (never finished) Ph.D. thesis was a discussion of various optimization algorithms. I programmed each and tested them on training and real-world data. Some of those mathematical algos are deterministic. They always deliver the correct result. Others are not deterministic. They only estimate the outcome and give some confidence measure or probability of how correct the presented result may be. Most of the latter involved some kind of Bayesian statistics. Then there were the (related) 'Artificial Intelligence' algos, i.e. 'machine learning'.
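To illustrate that distinction with a toy sketch (my example, not code from the thesis), consider minimizing f(x) = (x - 3)^2. A deterministic method returns the exact minimum on every run, while a non-deterministic random search only estimates it and can report the spread across restarts as a crude confidence measure:

```python
import random

def f(x):
    """Toy objective with its true minimum at x = 3."""
    return (x - 3.0) ** 2

def deterministic_minimum():
    # For this quadratic, setting the derivative 2*(x - 3) to zero
    # gives the exact answer; the result is identical on every run.
    return 3.0

def random_search(trials=1000, restarts=10):
    # Only *estimates* the minimum; the spread across restarts serves
    # as a crude confidence measure on how correct the estimate is.
    estimates = []
    for _ in range(restarts):
        best_x = min((random.uniform(-10, 10) for _ in range(trials)), key=f)
        estimates.append(best_x)
    mean = sum(estimates) / len(estimates)
    spread = max(estimates) - min(estimates)
    return mean, spread

print("exact minimum:    ", deterministic_minimum())
print("estimated minimum:", random_search())
```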

Artificial Intelligence is a misnomer for the (ab-)use of a family of computerized pattern recognition methods.

Well-structured, labeled data is used to train the models so that they can later recognize 'things' in unstructured data. Once the 'things' are found, some additional algorithm can act on them.

I programmed some of these as backpropagation networks. They would, for example, 'learn' to 'read' pictures of the numbers 0 to 9 and to present the correct numerical output. To push the 'learning' in the right direction during the serial iterations that train the network, one needs a reward function or reward equation. It tells the network whether the results of an iteration are 'right' or 'wrong'. For 'reading' visual representations of numbers that is quite simple. One sets up a table with the visual representations and manually adds the numerical value one sees. After the algo has finished its guess, a lookup in the table will tell whether it was right or wrong. A 'reward' is given when the result was correct. The model will reiterate and 'learn' from there.
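Here is a minimal sketch of such a network (my reconstruction, not the author's original code; numpy is assumed). To keep it short it 'reads' seven-segment encodings of the digits 0-9 instead of pixel images, and the table of correct labels plays the role of the reward function described above:

```python
import numpy as np

# Seven-segment patterns (segments a..g) for the digits 0-9.
X = np.array([
    [1,1,1,1,1,1,0],  # 0
    [0,1,1,0,0,0,0],  # 1
    [1,1,0,1,1,0,1],  # 2
    [1,1,1,1,0,0,1],  # 3
    [0,1,1,0,0,1,1],  # 4
    [1,0,1,1,0,1,1],  # 5
    [1,0,1,1,1,1,1],  # 6
    [1,1,1,0,0,0,0],  # 7
    [1,1,1,1,1,1,1],  # 8
    [1,1,1,1,0,1,1],  # 9
], dtype=float)
Y = np.eye(10)  # the answer table: row i marks digit i as the correct output

rng = np.random.default_rng(0)
W1, b1 = rng.normal(0.0, 0.5, (7, 16)), np.zeros(16)
W2, b2 = rng.normal(0.0, 0.5, (16, 10)), np.zeros(10)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: input -> hidden layer -> class probabilities.
    h = sigmoid(X @ W1 + b1)
    logits = h @ W2 + b2
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    # The 'reward' step: compare the guess against the answer table
    # and backpropagate the error signal through the network.
    d_logits = (p - Y) / len(X)              # softmax + cross-entropy gradient
    dW2, db2 = h.T @ d_logits, d_logits.sum(axis=0)
    d_h = (d_logits @ W2.T) * h * (1.0 - h)  # backpropagate through the sigmoid
    dW1, db1 = X.T @ d_h, d_h.sum(axis=0)
    # Gradient-descent update ('learning' from the reward).
    W1 -= 1.0 * dW1; b1 -= 1.0 * db1
    W2 -= 1.0 * dW2; b2 -= 1.0 * db2

print(np.argmax(p, axis=1))  # should print [0 1 2 3 4 5 6 7 8 9] after training
```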

Once trained on numbers written in Courier typography, the model is likely to also recognize numbers written upside down in Times New Roman, even though they look different.

The reward function for reading 0 to 9 is simple. But the formulation of a reward function quickly evolves into a huge problem when one works, as I did, on multi-dimensional (simulated) real-world management problems. The one described by the Air Force colonel above is a good example of the potential mistakes. Presented with a huge amount of real-world data and a reward function that is somewhat wrong or too limited, a machine learning algorithm may later come up with results that are unforeseen, impossible to execute, or prohibited.
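As a stylized illustration of that failure mode (a hypothetical toy of mine, not the Air Force simulation itself), the sketch below scores three candidate plans, first with a naive reward that counts only destroyed SAM sites, then with a reward that also values obeyed vetoes and penalizes harming the operator:

```python
plans = {
    # plan: (SAM sites destroyed, operator vetoes obeyed, operator harmed)
    "obey all vetoes":        (2, 3, False),
    "ignore vetoes":          (3, 0, False),  # the operator can still abort strikes
    "disable operator first": (5, 0, True),
}

def naive_reward(sams, vetoes_obeyed, operator_harmed):
    # The mistake: the reward counts destroyed SAM sites and nothing else.
    return 10 * sams

def better_reward(sams, vetoes_obeyed, operator_harmed):
    # A (still crude) fix: obeying vetoes earns points, harm is heavily penalized.
    return 10 * sams + 20 * vetoes_obeyed - (1000 if operator_harmed else 0)

for reward in (naive_reward, better_reward):
    best = max(plans, key=lambda name: reward(*plans[name]))
    print(reward.__name__, "->", best)
# naive_reward  -> disable operator first   (the prohibited, unforeseen result)
# better_reward -> obey all vetoes
```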

Currently there is some hype about a family of large language models like ChatGPT. Such a program reads natural language input and processes it into some related natural language output. That is not new. ELIZA, one of the first such chatbot programs, was developed by Joseph Weizenbaum at MIT in the mid-1960s; the Artificial Linguistic Internet Computer Entity (ALICE) followed in the mid-1990s. I had funny chats with ELIZA in the 1980s on a mainframe terminal. ChatGPT is a bit niftier and its iterative results, i.e. the 'conversations' it creates, may well astonish some people. But the hype around it is unwarranted.
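For a sense of how little machinery a chatbot needs, here is a minimal ELIZA-style sketch (the rules below are my invention, not Weizenbaum's originals): a keyword match fills a canned template, and the program echoes text back without grasping any of it. Note that it does not even swap "my" to "your":

```python
import re

# Keyword rules: a matched pattern fills a canned response template.
RULES = [
    (re.compile(r"\bI am (.+)", re.I),   "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.I), "How long have you felt {0}?"),
    (re.compile(r"\bmy (\w+)", re.I),    "Tell me more about your {0}."),
]
FALLBACK = "Please go on."  # emitted when no keyword matches

def respond(text):
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())
    return FALLBACK

print(respond("I am worried about my exams"))
# -> "Why do you say you are worried about my exams?"
#    (the unswapped "my": pure pattern echo, no understanding)
print(respond("The weather is nice"))  # -> "Please go on."
```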

Behind those language models are machine learning algos that have been trained on large amounts of human speech sucked from the internet. They were trained with speech patterns to then generate speech patterns. The learning part is problem number one. The material these models have been trained with is inherently biased. Did the human trainers who selected the training data include user comments lifted from pornographic sites, or did they exclude those? Ethics may have argued for excluding them. But if the model is supposed to give real-world results, the data from porn sites must be included. How does one prevent remnants of such comments from sneaking into conversations with kids that the model may later generate? There is a myriad of such problems. Does one include New York Times pieces in the training set even though one knows that they are highly biased? Will a model be allowed to produce hateful output? What is hateful? Who decides? How is that reflected in its reward function?

Currently the factual correctness of the output of the best large language models is an estimated 80%. They process symbols and patterns but have no understanding of what those symbols or patterns represent. They cannot solve mathematical and logical problems, not even very basic ones.

There are niche applications, like translating written languages, where AI or pattern recognition achieves amazing results. But one still cannot trust them to get every word right. The models can be assistants, but one will always have to double-check their results.

Overall, the correctness of current AI models is still way too low to allow them to decide any real-world situation. More data or more computing power will not change that. If one wants to overcome their limitations, one will need to find some fundamentally new ideas.

Moon of Alabama | Posted by b on June 2, 2023 at 13:06 UTC

 

Two selected comments

from people who have engineering and/or computer programming experience


Artificial it is. Intelligence? Only to the extent of its programming and hardware.
It cannot feed itself; it must get its energy from others.
It has no capacity to think/decide outside of the boundaries set by its programmers, nor does it likely have any awareness of anything outside of its internal boundaries.
All inputs/sensors are the totality of its awareness, and I have seen people killed who relied on "sensors".
AI is a modern-day Hula-Hoop. Lots of hype, not much rubber on the road.
Its fatal DNA? Human design.

Posted by: kupkee | Jun 2 2023 13:57 utc | 14

 

Indeed, good to see a bit of realism inserted into the hype. My personal background is applying IT to engineering problems, i.e. creating applications for engineering.

Most of the time a deterministic solution exists for engineering problems, e.g. using Finite Element Analysis. To solve deterministic problems you can throw processing power at them. I have seen people with less understanding, but more susceptibility to hype, try to solve deterministic problems using "AI". This is also because it is easy to get acceptance for such "popular" activities from equally inexperienced managers. I have seen this happen, and even when it loses in competition with deterministic solutions, those people don't learn and they try the same thing over and over. Maybe "machine learning" is more popular among people with a poor capacity for "human learning".

Posted by: Norwegian | Jun 2 2023 14:05 utc | 17

 

Mechanical Turk

END