mastodon.bida.im is part of the decentralized social network powered by Mastodon.
An antifascist, self-managed, Italian-speaking Mastodon instance based in Bologna. Respectful of privacy and anonymity.


#deepseek

14 posts · 14 participants · 0 posts today

"Nenhuma das plataformas de inteligência artificial generativa mais populares do mercado cumpre os requisitos mínimos da Lei Geral de Proteção de Dados, indica um estudo da FGV (Fundação Getúlio Vargas). Essa norma visa proteger usuários brasileiros de práticas de vigilância e de distorções em relações comerciais.

Os pesquisadores do Centro de Tecnologia e Sociedade, da FGV Direito no Rio de Janeiro, avaliaram as políticas de privacidade das seis plataformas de IA mais acessadas na internet, de acordo com dados do site App Magic. Acrescentaram ao grupo a Meta AI, que fica integrada a serviços populares e de acesso subsidiado, como o WhatsApp, o Facebook e o Instagram.

A chinesa DeepSeek (5 de 14 itens) e o Grok (6 de 14), de Elon Musk, descumprem mais da metade dos critérios elencados no Guia de Segurança da Informação para Agentes de Tratamento de Pequeno Porte, publicado pela ANPD (Autoridade Nacional de Proteção de Dados) em 2021. Os pesquisadores escolheram essas recomendações, que não são vinculantes, como um referencial mínimo, já que o material foi produzido para negócios com poucos recursos.

Essas duas ferramentas, por exemplo, não têm uma versão da política de privacidade traduzida para português.

A DeepSeek nem sequer menciona informações de contato do encarregado pelo tratamento de dados, a pessoa ou entidade encarregada por receber solicitações dos usuários e se reportar ao regulador em nome da empresa."

www1.folha.uol.com.br/tec/2025

Folha de S.Paulo · No AI platform meets the requirements of Brazil's data law, study finds · Claude, Gemini, and Meta AI are the most compliant tools, while Grok and DeepSeek stand out negatively

US authors’ copyright lawsuits against OpenAI and Microsoft combined in New York with newspaper actions

California cases over AI trainers’ use of work by writers including Ta-Nehisi Coates and Michael Chabon transferred to consolidate with New York suits from John Grisham and Jonathan Franzen and more

theguardian.com/books/2025/apr

The Guardian · US authors’ copyright lawsuits against OpenAI and Microsoft combined in New York with newspaper actions · By Ella Creamer

People have low expectations for AI. I asked ChatGPT (paid account, not the free stuff) what would make ChatGPT better. I think the answer is a lot higher quality than what people keep saying one should expect. This is what I got:

--- My Prompt:
With DeepSeek, there are things it is very good at, but it fails quickly and rather spectacularly. Your models tend to break down as well, but somewhere there must be a model that will be able to stay on topic, even when the topic changes, so that conversations move around like human conversations move around subject-to-subject. What would take ChatGPT over the top?

--- ChatGPT reply:
Here’s what it’ll take to get closer to that human-like conversation flow:

  1. Long-Term Memory That Actually Works
    Most models either have no memory or they remember only in a clunky, artificial way.
    A better memory system would not just recall facts, but contextual relevance—like “we talked about LLM efficiency two days ago and you hate cheerleader language.”
    It has to be smart recall, not just data storage.

  2. Thread Awareness
    Imagine a model that can say: “You mentioned DeepSeek earlier. Should I compare it again now that we’ve brought up open-source tools?”
    That requires models to track not just words, but conversational branches—like a tree, not a straight line.

  3. Graceful Fallbacks
    When a model doesn’t know, it should gracefully pivot, not hallucinate or go off-topic. This kind of conversational “stability” is rare and subtle.
    Most models that fail “spectacularly,” as you said, are trying to bluff through complexity instead of narrowing scope or clarifying.

  4. Hybrid Contextual Reasoning
    The future model needs to reason across subjects rather than switching on and off.
    If we're talking about language models, and you shift to science fiction, it should be able to bring up Asimov or neural networks in storytelling—bridging topics, not just hopping between them.

Right now, most models either:

  - Stay sharp but shallow (good at Q&A, bad at memory), or
  - Drift into friendly nonsense (chatty, but brittle)

The sweet spot will be a system that grows with you across time and topics—and doesn’t forget what you value in a conversation.

....
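The “Thread Awareness” point (item 2 above) is the most concrete of the four: track conversational branches as a tree, not a flat transcript. Here is a minimal sketch of what that bookkeeping could look like, in plain Python with made-up turns and topic tags; it illustrates the idea only, and says nothing about how ChatGPT or DeepSeek actually manage context.

```python
from dataclasses import dataclass, field

@dataclass
class Turn:
    """One message in the conversation, linked to the turn it replies to (hypothetical structure)."""
    text: str
    topics: set[str]                       # made-up topic tags for this turn
    parent: "Turn | None" = None
    children: list["Turn"] = field(default_factory=list)

    def reply(self, text: str, topics: set[str]) -> "Turn":
        child = Turn(text, topics, parent=self)
        self.children.append(child)
        return child

def earlier_branches(current: Turn, topic: str) -> list[Turn]:
    """Walk back to the root, then collect earlier turns that touched `topic`.

    This is what would let an assistant say "you mentioned DeepSeek earlier --
    should I compare it again?" instead of treating the chat as a straight line.
    """
    # Find the root of the conversation tree.
    root = current
    while root.parent is not None:
        root = root.parent

    # Depth-first search over all branches, excluding the current turn.
    matches, stack = [], [root]
    while stack:
        node = stack.pop()
        if topic in node.topics and node is not current:
            matches.append(node)
        stack.extend(node.children)
    return matches

# Tiny usage example with made-up turns.
root = Turn("Let's talk about LLM efficiency.", {"llm"})
a = root.reply("DeepSeek is fast but brittle.", {"llm", "deepseek"})
b = a.reply("Switching gears: open-source tools?", {"open-source"})
print([t.text for t in earlier_branches(b, "deepseek")])
```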

When you see stuff posted online meant to minimize the quality of output that's possible, keep in mind that this was a single question with no leading prompt. I think this is the kind of answer we want from our AI devices.

Thoughts? Feel free to tear this response apart. It needs that kind of human analysis.
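As a starting point for that analysis, the “smart recall, not just data storage” line from item 1 can also be made concrete. Below is an equally rough sketch of contextual retrieval: stored notes are scored against the current message and only the relevant ones come back, rather than dumping everything into the prompt. The scoring is a toy word-overlap cosine, and the Memory/recall names are invented for the example; a real system would use embeddings, and none of this reflects actual ChatGPT internals.

```python
import math
from collections import Counter

def _vector(text: str) -> Counter:
    """Toy bag-of-words vector; a real memory system would use embeddings."""
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

class Memory:
    """Stores past notes and recalls only the ones relevant to the current message."""

    def __init__(self) -> None:
        self.notes: list[str] = []

    def remember(self, note: str) -> None:
        self.notes.append(note)

    def recall(self, message: str, top_k: int = 2, min_score: float = 0.1) -> list[str]:
        # Score every stored note against the incoming message, best first.
        query = _vector(message)
        scored = [(note, _cosine(query, _vector(note))) for note in self.notes]
        scored.sort(key=lambda pair: pair[1], reverse=True)
        return [note for note, score in scored[:top_k] if score >= min_score]

# Usage with made-up notes: only the relevant one comes back.
memory = Memory()
memory.remember("User hates cheerleader language in replies.")
memory.remember("We compared DeepSeek and open-source models two days ago.")
print(memory.recall("Can you revisit that DeepSeek comparison?"))
```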

#ai #openai #deepseek