The prefacing article asks: are bots entitled to freedom of speech?
The First Amendment's wording, of course, leaves 'speech' only vaguely construed, though it is probably better read as implying communication more broadly.
The answer is probably yes: bot output could plausibly be safeguarded as protected speech, since the amendment does not really differentiate who is doing the speaking. Speech may come from individuals, groups of individuals, institutions, or businesses, and may have multiple authors. Second, even if one parses a distinction between human and non-human speakers, the embedded human authorship of a bot's output could be construed as protected as well, could it not? The amendment makes no distinction between speech of human and non-human origin, nor any distinction of citizenship or anything else. In its most abstract form, it reads simply to protect speech.
Online speech on private servers, however, may face limitations following from the distinction between public and private places. Technically, any Twitter bot can be banned (as has happened) even though its speech was originally coded by a human author. In other words, even if the authorship of such speech were ruled human, the limitations on speech in private spaces would still apply.
Businesses are likewise curtailed, even in public forums, in their right to speech insofar as advertising goes (signage space and format, for instance), so time restrictions could similarly apply to, say, bot advertising spam in public spaces in the future, as set by laws passed at the local level or more broadly.
Whether you like the bot because it truly cares, or dislike it because it misrepresents itself as human, is irrelevant in the eyes of the law. This doesn't mean, of course, that a bot's online speech is always protected. Twitter obviously banned a number of bots on its servers, and had its own reasons for doing so, though some on the alt right, having lost a number of 'friend' voices, might complain. Beyond that, whether bots have freedom of speech is probably irrelevant to issues of bot communication and moderation: why, in other words, would laws need to be passed in the first place to curtail bot speech?
The article suggests that bots and 'fake news', going hand in hand, have played a role in spreading much 'misleading content'. That said, the same is no less applicable to groups of individuals or governments, which have done much the same in the past. Bots may, through automation, scale the volume of information, but in many respects they are only as good (at present) as the human programmers who conceived of the ways to manipulate people. Moreover, all the tools for scaled dissemination of 'misleading news' exist just as readily for people.

This raises a point: what is the distinction between task automation and 'bots'? What delineates tools of scale and automation from a bot performing the same tasks, except that a human was required to repeat the task many times rather than instructing a bot to perform all of it? By degrees of difference, task automation is involved in either process. In other words, if the 'bot' writes the email as well as compiling the mass mailing of deception, what makes it so different from the human who, task for task, does the same thing without a bot at his or her disposal? The tools of task automation (email address lists and so forth) are still there. Is such task automation worthy of the label 'bot', and for the purposes of censorship therefore not protected speech, whenever it is employed? This points to the legal and logical problems of parsing any definition of 'bot' construed for the purpose of legislating the legality of speech. Such a definition is, perhaps, not only cumbersome but potentially absurd and labyrinthine.
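To make the point concrete, here is a minimal sketch in Python (purely illustrative; every name and address in it is hypothetical) showing that a 'bot' mass mailing and a human-driven one can share the exact same code path, differing only in whether a person triggers each send:

    # Illustrative sketch: the composing logic is identical whether a
    # human or a 'bot' drives the loop.
    from email.message import EmailMessage

    RECIPIENTS = ["alice@example.com", "bob@example.com"]  # hypothetical address list

    def compose(recipient: str) -> EmailMessage:
        """Build one message; this step is the same for human and bot."""
        msg = EmailMessage()
        msg["To"] = recipient
        msg["Subject"] = "Mass mailing"
        msg.set_content("The same message, whether human-sent or bot-sent.")
        return msg

    def human_driven(recipients):
        """A person confirms each send: task automation with a human in the loop."""
        for r in recipients:
            input(f"Press Enter to send to {r}: ")
            yield compose(r)

    def bot_driven(recipients):
        """The same loop, run unattended: what we would call a 'bot'."""
        for r in recipients:
            yield compose(r)

    if __name__ == "__main__":
        # Dry run: actual delivery (e.g. via smtplib) is omitted. The point
        # is that both paths produce identical messages.
        for msg in bot_driven(RECIPIENTS):
            print(msg["To"], "<-", msg.get_content().strip())

By the degrees-of-difference argument above, a statute would have to say at exactly which point on that spectrum a 'bot' begins, which is precisely the labyrinthine problem.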
Perhaps the future looks different in this respect, if machine learning systems 'learn' better ways of deceiving people and there remains an unwillingness to tackle the systemic issues involved.