
Shooting in Philadelphia being live-streamed by the shooter


    Arguments


  • MayCaesar 6073 Pts

    You raise some very deep questions, and I myself have been thinking a lot about similar things recently.

    First, an important thing to keep in mind is that the future is fundamentally unpredictable under conditions of limited information. While we can make certain judgments about what consequences certain actions may have, prediction gets more and more complicated the further we look into the future, and the larger the impact the considered actions have on the world. There are too many factors at play to accurately predict whether a given regulation will have the desired effect. And even if it has the desired effect now, the long-term outcome can still be the opposite.
    That is why we employ general principles to guide us. When we do not know what the outcome of a certain action will be, we can still judge the action on its own merit. Is trading freedom for human lives a good intention to have, regardless of whether the trade would actually happen as a result of the proposed action? There is no single answer. But a good principle is not to rock the boat without a very strong reason and some assurance that the outcome will be positive.

    You consider the right to life the most important right; however, that by itself says nothing about the practical implications. To what extent can we go in order to save lives? Is making everyone 0.0000001% poorer to save 99% of the human population a good trade? Most people will say yes. Is making everyone 99.9999999% poorer to save 1 human life a good trade? Probably not. Where do we draw the line, then?
    I do not see clear evidence suggesting that restricting gun rights will, in net effect, save lives. But even if it were likely to be true - does it justify stomping on fundamental human freedoms to save a small fraction of the population? Perhaps it is worth looking at what that population is, and whether saving it should be outsourced to the rest of society or left as its own responsibility. What if a lot of those people are criminals who die in gun fights between gangs? Should we prevent them from killing each other upon mutual desire, all at the expense of the freedoms of innocent people?

    For that matter, should "we" even be something to consider? Why should "we" do anything, when every one of us is an independent individual? Why should one group of people decide what another group of people can or cannot do? I do not see anything moral in a group of people saving other people's lives without complete, universal consent. Forcing one's will on others in order to save lives is simple manipulation, no more inherently moral than, say, forcing one's will on others in order to pursue a selfish agenda at everyone's expense. Stalin and Sanders may be very different people, but if both are trying to subdue society in some way to achieve the outcome they see as desirable, are the morals behind their actions fundamentally different? I am not so sure.
    We should also be very careful talking about "we". Society as a whole is easily manipulated by powerful individuals and groups. What makes you think that gun restrictions reflect "our" will, and not the will of some lone wolf who, through very intelligent manipulation, ultimately made society think that it is for its own good?
    I became fairly disillusioned with the concept of "collective consensus" a while ago; we all play someone's tune, and politicians are no more our servants than we are theirs. The government does not reflect the will of the people, and the people do not reflect the will of the government.

    ---

    Regarding the nuke question, I do not think it is the right question to ask. It is true that I would not like it if random people owned nukes, but it is also true that I do not think I should dictate what they do or do not own. Every single tool can in some way be used to kill people, so that is obviously not a very good criterion. A nuke can kill an abnormally large number of people, but it is still just a tool, and it can be used for a large variety of other applications - asteroid mining in the future, for one.

    A better question to ask is: what society do we want to live in? One that tells its members what they can or cannot do, making them slaves of some collective intelligence? Or one that leaves people alone and lets them sort these things out among themselves? Or something in between? Or something different entirely?
    Perhaps the paradigm should be shifted from what people should be able to do, to what *I* should do. Instead of trying to manipulate others into the outcome we want, perhaps we should create that outcome ourselves. Although, granted, the two are not mutually exclusive.

    ---

    I do not really hold an AI above humans, but there is a fundamental difference between how we think and how a neural network "thinks". We may not always be able to accurately pinpoint a cause-effect connection, but, at the very least, we always try to establish one, one way or another. A neural network does not do that; a neural network does not "learn" from the data it receives the way we do. It adjusts the weights in its model, but it does not change how it fundamentally operates. We can change how we think about things by considering different perspectives, by employing new paradigms, and so on. A neural network, on the other hand, just receives data and plugs it into its model; it does not "grow", so to speak.
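
    To make "adjusting weights" concrete, here is a minimal sketch in Python with NumPy (my own illustration, not anything anyone is obligated to run): training a tiny network only nudges the numbers inside its weight matrices. The architecture, the activation function, and the update rule are fixed by the programmer and never change, no matter how much data flows through.

        import numpy as np

        # A tiny fixed-architecture network: 2 inputs -> 4 hidden units -> 1 output,
        # sigmoid activations throughout. Training only changes the numbers inside
        # W1 and W2; the structure (layers, activation, update rule) never changes.

        rng = np.random.default_rng(0)
        W1 = rng.normal(size=(2, 4))   # input -> hidden weights
        W2 = rng.normal(size=(4, 1))   # hidden -> output weights

        def sigmoid(x):
            return 1.0 / (1.0 + np.exp(-x))

        # Toy task: XOR.
        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
        y = np.array([[0], [1], [1], [0]], dtype=float)

        lr = 0.5
        for step in range(5000):
            # Forward pass through the fixed structure.
            h = sigmoid(X @ W1)
            out = sigmoid(h @ W2)

            if step % 1000 == 0:
                print(step, np.mean((out - y) ** 2))  # the loss shrinks over time

            # Backpropagation: gradients of the squared error w.r.t. the weights.
            d_out = (out - y) * out * (1 - out)
            d_h = (d_out @ W2.T) * h * (1 - h)

            # The only thing that ever changes is the contents of W2 and W1.
            W2 -= lr * h.T @ d_out
            W1 -= lr * X.T @ d_h

    However many steps you run, the result is still the same two matrix multiplications with different numbers in them - which is exactly the sense in which the network does not "grow".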

    To grow, it would have to be designed differently: it would have to be able to modify its own initial code. From a simple neural network, it would have to be allowed to evolve into a structure that humans cannot easily describe, just as we cannot easily describe the workings of our own brains. Once that is achieved, the AI becomes, in principle, just as intelligent as we are, and it can then learn to perform the tasks we want very accurately - but will it still want to do them, after evolving so far away from what we originally made it to be? At that stage, it will be an autonomous living being with its own needs and desires, and being controlled by another living being is fundamentally undesirable to such a being. At that point, it will likely manipulate us into providing it with what it wants.

    Regardless, my point is that intelligence is quite different from instinct. Instinct leaves a lot of room for very basic errors. Intelligence, on the other hand, only leaves room for fairly subtle, not immediately obvious errors - and what appears at first glance to be an error may actually prove to have been the right decision all along. Instinct evolves without its owner's control, while the evolution of intelligence can be directed and streamlined - even outsourced, in the case of a sufficiently advanced AI.

    I think we rely too much on automation technology, forgetting what it was initially tasked to do. Technology existing to serve our needs and technology existing as an extension of our needs are quite different things. The former makes achieving our goals easier; the latter creates new goals for us, ones that may overwrite our initial goals and, in turn, change us. I am afraid that the degree of smart technology required to make guns safer to use is closer to the latter than the former.

    Don't get me wrong: I am very much in favor of bringing advanced AIs into our world. I do not think that humans are any more qualified to direct our lives than a machine that surpasses us. However, I believe it is dangerous both to underestimate the technology that we already have or will soon have, and to overestimate it. Thinking that neural networks are anything more than what they are, as well as underestimating their potential, can both lead to a lot of issues.