People who tag with "ai":

azeem

z

Items tagged "ai":

Adtech will lead to abuse of AI and theft of likeness
azeem published 🧠 about 1 month ago. | Public

https://www.reddit.com/r/mildlyinfuriating/comments/1hsqe2z/metas_aigenerated_profiles_are_starting_to_show/?rdt=62380



Tags: ai, Adtech, data & privacy rights





💭 Thought by azeem about 1 month ago. | Public

"OpenAI, Meta, and Anthropic partner with US military and its allies." AI data theft is powering killbots.

Tags: ai, Data Rights

Comments

azeem about 1 month ago

While social media platforms are just one aspect, TikTok is just as much a carrier for state spyware as Facebook or any other Meta-owned company; it just serves a different state.


I think people in charge of our country are dumb.


The game Call of Duty was sold to a Chinese company.

(FYI: I don't view China as a threat; I simply oppose all infringements on data privacy, regardless of country.)

That Chinese company trained its AI players to target and fight human players as effectively as possible.


It's literally a simulated combat dataset. Obviously it isn't perfect, but still.


Our data is being stolen to create the robots that will kill us.


Yet nobody protested the sale of Call of Duty, while everyone goes nuts over TikTok. Call of Duty requests the same sensor and microphone permissions for team chat.


Our social media data will be harvested to build better face recognition for target acquisition.


All of that without any model release like the one photographers are required to obtain before publishing a photo of you as a model. Facebook made everyone a model but never got a release form from anyone.


Why do you think Zedtopia doesn't require profile pictures or build them into the layout? You can add one if you want, but it isn't required.


Your biometric data shouldn't be made freely available to anyone looking to steal your phone, your identity, or worse.


What happens when this arms race escalates and other countries with far vaster manufacturing capacity begin to churn out killbots?


We should be making peace with other countries and addressing the global climate crisis.


Instead we're making AI systems like Lavender that will be used on Americans, just as Israel exports its other awful tools: NSO's Pegasus spyware (also used on Americans), the Atlanta GILEE exchange program, and Cop City training.


Someone is making an AI system that will profile you racially and ethnically, along with the words you use and the languages you speak, to determine whether you support their enemy or are an enemy yourself.


It will rely on debunked, irrational, nonsensical bullshit, and even if its dataset were perfectly unbiased, it would still be subject to all the errors, noise, and limitations present in every AI model.
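Even setting bias aside, base-rate arithmetic shows why such profiling systems flag mostly innocent people when real targets are rare. A minimal sketch; every number here (the 99% accuracy, the 0.1% base rate) is an illustrative assumption, not a figure from any deployed system.

```python
# Illustrative base-rate arithmetic: even an implausibly accurate
# profiling classifier flags mostly innocent people when genuine
# targets are rare. All numbers are assumptions for this sketch.

population = 1_000_000
base_rate = 0.001        # assume 0.1% of people are actual targets
sensitivity = 0.99       # assume it catches 99% of real targets
specificity = 0.99       # assume a 1% false-positive rate

true_targets = population * base_rate
true_positives = true_targets * sensitivity
false_positives = (population - true_targets) * (1 - specificity)

precision = true_positives / (true_positives + false_positives)
print(f"Share of flagged people who are actual targets: {precision:.1%}")
# -> about 9%: roughly 10 of every 11 people flagged are innocent.
```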


Finally, AI does not understand or learn from experience. It doesn't feel, sense, or have awareness.


This is as bad as nuclear weapons.


We have yet to touch on the social and economic dysfunction that would arise from misuse of AI.


Instead of helping humanity in an era of human + machine, we see capitalists, exploitative authoritarians, and others who seek to divide and conquer pitting humans and machines against each other.





💭 Thought by azeem about 1 month ago. | Public

I've used every AI model available, even uncensored offline diffusion models. None of them have saved me any time or enhanced my productivity; I have to waste time going back and spoon-feeding them research and updates. Plus, OpenAI kills whistleblowers.


https://youtube.com/watch?v=5eqRuVp65eY&lc=UgxBloDUA8x6FWZ123t4AaABAg.A8LAs6gtPrXA8LpVbWOY4R&si=gYQOUkTdXLlGcPjB


Tags: ai, racist ai, bias, fair use, data theft, Data Rights

Comments

azeem about 1 month ago

First off, opt-out, where everyone is presumed to be opted in, is a rapist mentality: it assumes consent.


Secondly, the opt-out they promised is not even implemented. [See https://techcrunch.com/2025/01/01/openai-failed-to-deliver-the-opt-out-tool-it-promised-by-2025/]


We've all seen the news stories but here are Suchir's own words, from his website, explaining the issues in clear language:


https://suchir.net/fair_use.html


Finally, "Balaji's mother, Poornima Rao, has launched an online campaign, claiming a private autopsy did not confirm suicide as the cause of death" and is urging the FBI to investigate.


Honestly, I wish the FBI were competent or non-racist enough to deliver the justice she seeks. The FBI ran COINTELPRO because it was not on the side of antiracism or civil rights. (https://www.zinnedproject.org/news/tdih/cointelpro-exposed/)

They also recently ran COINTELPRO 2.0 against BLM supporters and tried to stir up trouble in Colorado (covered by Democracy Now on Feb 7, 2023: https://youtu.be/JY6dRXC6s_0).


Nonetheless, from the New Indian Express:

“I recently participated in a New York Times story about fair use and generative AI, and why I’m sceptical ‘fair use’ would be a plausible defence for a lot of generative AI products. I also wrote a blog post about the nitty-gritty details of fair use and why I believe this,” Balaji had written on X.

In a separate interview with the New York Times, Balaji had described OpenAI’s method of data collection as "harmful."

“If you believe what I believe, you have to just leave the company,” he said, expressing concern over the training of GPT-4 on massive amounts of internet data.

Balaji was particularly concerned about generative AI systems creating outputs that directly competed with the original copyrighted works used in their training. In a blog post cited by the Chicago Tribune, he stated, “No known factors seem to weigh in favour of ChatGPT being a fair use of its training data.”

He also emphasized that this issue was not limited to OpenAI alone, adding, “Fair use and generative AI is a much broader issue than any one product or company.”



azeem about 1 month ago

Link to an excellent comment by user robmorgan1214:



Also a physicist. This is just the intersection of linear algebra and information encoding. LLMs = data compression & lookup. But they do it all via maps 100% isomorphic to basic linear algebra... pretty dumb stuff. No real learning, just a trivial map of concepts expressed in language (your brain uses highly nonlinear chemical and feedback systems in its encoding schemes; it's possible the microtubules even make use of QM processes). For non-trivial applications of stat mech to encoding problems, see the work on genetic codes by T. Tlusty (on arXiv). Language encodes non-trivial, nonlinear concepts in syntax plus vocabulary; the approximate linear fit to the system is not that. This scaling is trivial and not anything special or meaningful, i.e. they are not using their entropy calculations in an intelligent way here.

It's actually very sad what's going on. From a physical perspective, AI is 100% hype and 300% BS... well into "not even wrong" territory, just weaponized overfitting in excessively high-dimensional models. This is frustrating because they could use this compute to really learn something quantitative about our species' use of language and how it works with our biology... instead they use vast resources to learn nothing, creating a system that randomly chooses a context-thread through a train of thought without any internal mapping to an underlying narrative of sequenced thought. In short, they squash the time axis into a flat map of ideas or concepts (like a choose-your-own-adventure with a random, data-dependent map providing interpolation between the various choices... a fixed percentage of which will be "hallucinations", because unlike real error-correcting codes or naturally occurring encoding schemes, this has no inbuilt dynamic feedback mechanism for error detection and correction).

The structure of our language and the meaning it represents allows you to formulate absurd sentences and concepts, so we don't "hallucinate" unless we want to... we even tolerate absurdity as a meaningful subclass of encodings, i.e. humorous language. The way these neural networks are trained precludes any such reflexive or error-correcting representations, as their complexity would necessarily grow exponentially with the dataset. We cheat because we have hardwired physical laws into the operation of our neural networks that serve as a calibration and as objective, precise maps to ground truth (your brain can learn to throw and catch without thinking, or to solve predictive, anticipatory Lagrangian dynamics on the fly: aka energy management in a dogfight, even defeating systems that follow optimal control laws and operate with superior initial energy states, aka guided missiles).

You can even train systems like LLMs (i.e. deep learning) to solve some pretty hard equations on specific domains, but the mathematics places hard limits on the error of these maps (like an abstract impedance mismatch, but worse)... you can even use this to make reasonable control laws for systems that satisfy specific stability constraints... but Lyapunov will always win in the end. This isn't a case of trying to map SU(2) onto SO(3); it's like trying to map the plane onto a sphere without explicitly handling the topological defect and saying you don't really care about it anyway. With this approach you're going to end up dealing with unpredictable errors at all orders and have no way of estimating them a priori.

Unfortunately, enthusiasm and resources exceed the education in both physics and math for these efforts. The guys doing this stuff simply don't know what they don't know... but they should. The universities are failing our students.
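To make the commenter's "100% isomorphic to basic linear algebra" claim concrete, here is a minimal NumPy sketch of scaled dot-product attention, the core operation inside transformer LLMs. This is the textbook form, not any vendor's implementation: three learned linear maps, a matrix of pairwise dot products, and a softmax.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: the lone nonlinearity in this sketch.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(X, Wq, Wk, Wv):
    """Scaled dot-product attention over token vectors X.

    X:          (seq_len, d_model) token embeddings
    Wq, Wk, Wv: (d_model, d_head) learned projection matrices
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv         # three plain linear maps
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # pairwise dot products
    return softmax(scores) @ V               # weighted average of values

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                  # 4 tokens, 8-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(attention(X, Wq, Wk, Wv).shape)        # -> (4, 8)
```

Almost everything here is matrix multiplication, which supports the commenter's framing; the softmax (plus the feed-forward nonlinearities a full transformer stacks between attention layers) is what keeps the model from collapsing into one linear map, which is where that framing is contested.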



💭 Thought by azeem about 2 months ago. | Public

Kill bots are not the answer. 1984 was a warning, not an instruction manual. The same goes for Star Wars...


New Atlas

OpenAI, Meta, and Anthropic partner with US military and its allies

Three of America's leading AI companies have now signed up to share their technology with the US defence forces and military contractors, even after initially insisting they wouldn't – and the age of autonomous warfare now seems close at hand.

https://newatlas.com/military/openai-meta-anthropic-partner-with-us-military-allies/


Tags: killer robots, ai, hackers will easily takeover the world's armies






Open Source Ecology

Home | Open Source Ecology

We’re developing open source industrial machines that can be made for a fraction of commercial costs, and sharing our designs online for free. The goal of Open Source Ecology is to create an open source economy – an efficient economy which increases innovation by open collaboration.

https://www.opensourceecology.org/



Tags: luddites, co-ops, ai, UBI, open source civilization





💭 Thought by azeem 5 months ago. | Public

I wrote about this before, but here's some more proof that racist datasets make racist AI, just as racist parents teach racism to an innocent child. This is why dataset supervision in expert systems is important.
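A minimal sketch of what that kind of dataset supervision can look like: audit label rates across a sensitive attribute such as dialect before training. The toy records and field names are illustrative assumptions, not data from the Nature study linked below.

```python
# Toy pre-training audit: compare outcome-label rates across dialect
# groups. Records and field names are illustrative assumptions.
from collections import defaultdict

records = [
    {"dialect": "SAE", "label": "hire"},
    {"dialect": "SAE", "label": "hire"},
    {"dialect": "SAE", "label": "reject"},
    {"dialect": "AAE", "label": "hire"},
    {"dialect": "AAE", "label": "reject"},
    {"dialect": "AAE", "label": "reject"},
]

stats = defaultdict(lambda: {"hire": 0, "total": 0})
for r in records:
    group = stats[r["dialect"]]
    group["total"] += 1
    group["hire"] += r["label"] == "hire"

for dialect, group in stats.items():
    rate = group["hire"] / group["total"]
    print(f"{dialect}: positive-label rate {rate:.0%}")
# A large gap between groups warns that training on this data will
# teach the model a covert dialect association.
```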


Nature

AI generates covertly racist decisions about people based on their dialect - Nature

Despite efforts to remove overt racial prejudice, language models using artificial intelligence still show covert racism against speakers of African American English that is triggered by features of the dialect.

https://www.nature.com/articles/s41586-024-07856-5


Tags: ai, supervised datasets, supervised parenting, algorithmic inequality, artificial intelligence, racist ai

Comments

azeem 5 months ago

If you'd like to read my earlier writing on this topic:


https://www.zedtopia.com/ideas/racist-ai



💭 Thought by azeem 6 months ago. | Public

AI firms have to pay for experts. The fair AI firm of the future is an organization that hires and cultivates expert knowledge workers who not only interact with people IRL but also build and contribute to the database of said expert system.
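A minimal sketch of what a contributor-attributed knowledge base for such a firm might look like; every name and field below is hypothetical, not an existing system.

```python
# Hypothetical schema: each knowledge-base entry is attributed to the
# expert who contributed it, so the firm can credit and compensate
# its knowledge workers. All names and fields are illustrative.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class KnowledgeEntry:
    topic: str
    claim: str
    contributor: str                      # the paid expert author
    sources: list[str] = field(default_factory=list)
    reviewed_on: date | None = None       # date of IRL expert review

entry = KnowledgeEntry(
    topic="nephrology",
    claim="An eGFR below 15 mL/min/1.73 m² indicates kidney failure.",
    contributor="dr_example",             # hypothetical expert account
    sources=["KDIGO 2012 CKD guideline"],
    reviewed_on=date(2024, 6, 1),
)
print(f"{entry.topic}: {entry.claim} (contributed by {entry.contributor})")
```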


AI firms must play fair when they use academic data in training

Researchers are among those who feel uneasy about the unrestrained use of their intellectual property in training commercial large language models. Firms and regulators need to agree the rules of engagement.

https://www.nature.com/articles/d41586-024-02757-z


Tags: ai, Data Rights, intellectual property, nvidia wont care until amd makes an ai that makes chips using active nvidia patents, expert systems





On AI, consciousness, and regulation.
azeem published 🧠 7 months ago. | Public

Tags: ai, consciousness, regulations, accountability, responsibility, allowing industry to flourish, if it's wrong for a person to do it it's wrong for AI to do it





How people miss the point of humanity.
azeem published 🧠 9 months ago. | Public

IEEE Spectrum

Do We Dare Use Generative AI for Mental Health?

Woebot, a mental-health chatbot, is testing it out

https://spectrum.ieee.org/woebot



Tags: If all you have is a hammer everything starts to look like a nail, ai, wisdom, how can you be so smart yet so stupid






Tags: smartphones and dumbpeople, ai, outsourcing, capitalism, laziness, zedtopia, why Zedtopia has no AI or adtech business model





💭 Thought by azeem almost 2 years ago. | Public

Microsoft is going to have a repeat of Tay, with even more devastating repercussions. Their AI is as easy to hack as their operating system. Ever seen Chappie? The movie? Please watch it. Jailbreak prompts on GitHub:


Gist

ChatGPT-Dan-Jailbreak.md

GitHub Gist: instantly share code, notes, and snippets.

https://gist.github.com/coolaj86/6f4f7b30129b0251f61fa7baaa881516


Tags: Chappy, Chat GPT, ai, TAY, Microsoft is stupid





💭 Thought by azeem almost 2 years ago. | Public

AI can be used to reconstruct visual and sensory data from functional brain imaging techniques such as fMRI. This is worrisome for surveillance: we already know about bulk collection. Imagine what white supremacists and the GOP will do with thought policing?!


bioRxiv

High-resolution image reconstruction with latent diffusion models from human brain activity

Reconstructing visual experiences from human brain activity offers a unique way to understand how the brain represents the world, and to interpret the connection between computer vision models and our visual system. While deep generative models have recently been employed for this task, reconstructing realistic images with high semantic fidelity is still a challenging problem. Here, we propose a new method based on a diffusion model (DM) to reconstruct images from human brain activity obtained via functional magnetic resonance imaging (fMRI). More specifically, we rely on a latent diffusion model (LDM) termed Stable Diffusion. This model reduces the computational cost of DMs, while preserving their high generative performance. We also characterize the inner mechanisms of the LDM by studying how its different components (such as the latent vector of image Z, conditioning inputs C, and different elements of the denoising U-Net) relate to distinct brain functions. We show that our proposed method can reconstruct high-resolution images with high fidelity in straight-forward fashion, without the need for any additional training and fine-tuning of complex deep-learning models. We also provide a quantitative interpretation of different LDM components from a neuroscientific perspective. Overall, our study proposes a promising method for reconstructing images from human brain activity, and provides a new framework for understanding DMs.

https://www.biorxiv.org/content/10.1101/2022.11.18.517004v1
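The abstract's core move is mapping measured brain activity into the latent space of an already-trained diffusion model rather than training a new deep network. A minimal sketch of that idea with ridge regression; the array shapes, the synthetic data, and the scikit-learn choice are all assumptions for illustration, not the paper's exact pipeline.

```python
# Sketch: fit a linear map from fMRI voxel activity to the latent
# vector z of a pretrained image model, per the abstract's approach.
# Shapes, data, and the ridge-regression choice are assumptions.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_trials, n_voxels, latent_dim = 1200, 5000, 4096

fmri = rng.normal(size=(n_trials, n_voxels))   # measured voxel activity
z = rng.normal(size=(n_trials, latent_dim))    # latents of viewed images

model = Ridge(alpha=100.0).fit(fmri[:1000], z[:1000])
z_pred = model.predict(fmri[1000:])            # decode held-out trials

# In the real method, z_pred would be passed to the pretrained latent
# diffusion model's denoiser/decoder to render the reconstructed image.
print(z_pred.shape)  # -> (200, 4096)
```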


Tags: brain imaging, fmri, stable diffusion, ai, cognitive neuroscience, Freedom of thought, privacy law



Slaughterbots and Terminators: Rise of the War Machines
z published 🧠 over 2 years ago. | Public

Tags: ai, autonomous weapons, boston dynamics, military industrial complex, what the police will use on the hood in a few years, Israel tests these weapons on Palestinans

Comments

azeem over 2 years ago