GET IN NOW: Accountability hacks for 2017 privacy resolve

There are common means by which you can personally escalate privacy rights accountability and enforcement.

First, let me explain...

There are a lot of lawyers and scholars in Silicon Valley visibly chewing on how to hold corporate governance accountable. That’s all fine and great, but they’ve been doing it to negligible effect for over 15 years.  The argument coming from Berkeley is that privacy “self-regulation has been an abject failure.”  The truth is that companies with a self-regulatory regime are far more accountable than companies without one.

What isn’t quite being said is that most companies haven’t adopted self-regulatory regimes because their privacy and information security governance is poorly adapted.  It is poorly adapted for regulatory compliance because of the selective enforcement of government regulatory outcomes for online data infringements.  Selective enforcement, in turn, is a product of the government business climate and Capitol politics.

For example, political parties, as non-profits, are staffed to the gills with lawyers.  They are private entities who claim responsibility for the public interest. They are responsible for voter information. Many of their field campaign offices fail to protect voter or consumer information, which is readily sold by elections offices to candidate campaigns. Information gathered by an elections office is still subject to the Privacy Act and the regulations in the E-Government Act.  Institutions, not unlike the DNC, license their lists and trade information, but lack an accountability structure and basic CISO governance for these exchanges.  No data breach protocol. No insider threat protection. No nothing.

Since politicos have already opted to overlook basic consumer protections as a matter of accepted commercial practice, information security is merely grist for the mill. Until, of course, facts, identities, e-mails and partisan custodial intelligence fall into the hands of an opponent they cannot control, like a foreign government.

The people at the nexus of government identity mandates (Social Security numbers, driver’s licenses, business licenses, and all the diverse forms and records required by government) seem to be America’s “forgotten” in the political information works. Somewhere in this train is you.

Consumers are increasingly treated like interlopers when it comes to asserting their own interests and claims over their personal information with corporate data holders. To clarify, corporations are acting aggressively on behalf of government mass surveillance interests (e.g. Microsoft, Yahoo!, and AT&T). Identity theft victims, in particular, feel most helpless about how aggressively the networked world treats them.  Life becomes very difficult if you are treated as an imposter in your own transactional affairs.

Even when you’re not getting Hoovered into the national security aperture or a Smart City botnet by government data holders, your information security prospects are still at risk, especially if you are poor.  “Poor” here may be loosely defined as persons who rely on a government agency for assistance with housing, food, education or other needs. The poor are required to produce exhaustive and invasive amounts of information to qualify for government aid. They don’t have money, but they do have information.

This is also the subsidy strategy for the Internet’s freemium business model.  

If you were to conclude that pro-surveillance partisans and Silicon Valley share not just the same business designs and information base, but the same legal academics, the same financial stewards, the same whitewashed non-profit interests and the same PACs, I would say you're right.

Yes. Your individual privacy failure is their Palo Alto and Medina mortgage fodder. 

Consumers will need much more personal recourse now and a knowledge base to help them self-advocate.

Consumers need self-advocacy now because:
 

  • Non-profit advocacy groups don’t or won’t take the case, or their efforts are stunted by political leveraging.
     
  • Government accountability offices (FTC, Attorneys General, PCLOB, OMB) won’t move on your complaint, or they place the need for an attorney or political efficacy ahead of due diligence.
     
  • Government accountability wheels move far too slowly to respond to your personal crisis.

  • Government offices have a nonfunctional, outdated or non-existent information security governance architecture. You’re better off not delegating any further personal information to them.
     
  • Civil lawsuits are way out of reach if you are homeless or in dire economic straits. Discovery would expose your vulnerabilities further in a ‘pay-to-stay’ court system.

  • Privacy is not ensured by the claims of helpful media pundits or the vicarious participation of expert scholars. It is only ensured by the integrity of your personal efforts.
     
  • Corporations ignore or combat personal requests to stop the licensed use of personally identifiable information.  They may impose non-lawful qualifiers before they will stop, like requiring an identity theft report or a copyright infringement claim when neither occurred.  If you submit such a false claim, you become a liar and your claim will not stand in court.  They keep licensing your information.

  • The US permanent political class functions as an adversary to the legal privacy rights of its constituents while staffing up privacy placeholders to protect itself from your lawsuits.

I can provide guidance for people who have reached a personal crisis threshold between corporate and government authorities.  I am willing to teach others at negligible cost, for a limited time only.

If you are interested in personal privacy coaching or a privacy impact assessment consultation, I am making my services available to members of the general public, not just businesses, startups and law firms.

If you would like to teach privacy to people who are interested, I am interested in you. Community mentorship for privacy accountability will be vital for public and private accountability.  We know the information space between the two occasionally merges against the interests and the will of the people.  To even up the odds, I would help you for minimal cost as a way to inoculate community knowledge centers for personal privacy.

If you are interested in helping secure my public works connections, or you would like to be an information security sponsor, either as a co-instructor or as an infosec service provider, I would like to hear from you.

If you are a web or video media producer with a high interest in the subject matter, I can provide you with content and guest instructors.

If you are a marketing maven and you would like to help monetize this effort, sustaining it as a commercial opportunity for the future of individual privacy education while actually making a buck, let’s look at your sales work.

If you want privacy done right, you have to do it yourself. Let me help.

Use the contact form to be in touch. 

 

SOURCES:

http://www.digitaltrends.com/computing/yahoo-surveillance-microsoft-google/

https://theintercept.com/2016/08/04/microsoft-pitches-technology-that-can-read-facial-expressions-at-political-rallies/

https://www.wired.com/2015/08/know-nsa-atts-spying-pact/

http://www.networkworld.com/article/3123672/security/largest-ddos-attack-ever-delivered-by-botnet-of-hijacked-iot-devices.html

https://www.incapsula.com/blog/malware-analysis-mirai-ddos-botnet.html

https://www.eff.org/deeplinks/2016/04/rule-41-little-known-committee-proposes-grant-new-hacking-powers-government

http://www.housingwire.com/articles/38573-hud-inadvertently-exposed-personal-information-of-nearly-500000-individuals?eid=311694375&bid=1595299

An Ethical Horizon for Artificial Intelligence: Bias (Part 2/4)

Bias has afflicted societies since the dawn of civilization.  Machine learning innovations, like money and tools, have great potential both for good and for evil.  Introducing ethics into the anthropology of AI now may train social intent for good, altering the course of human history.


An Ethical Horizon for Artificial Intelligence (Part 1/4)

 

Self-governance leadership can improve the future of AI, if companies are brave enough to adopt ethical tools and new business model leadership now.

PART 1

Artificial Intelligence is mature enough for professional ethics, but legal and academic haggling could roll on for many years, as it has with privacy policy governance.  We are in a world quick to fund and produce weaponized artificial intelligence. Commercial AI leads a quietly unchallenged data reign, relatively unfettered by ethical disciplines.  While examples of poor ethical behavior involving AI abound, the consumer public can’t necessarily afford to wait for policy wonks to emerge with a brand of consensus.

A significant percentage of the US academic community enamored with AI will continue to enable power differentials that actively harm human rights interests. If you leave subjective ethical preferences exclusively to academic AI developers, you may wait behind the political will of public grant funders.  If you leave ethics to the companies who use and market AI, you might invoke consumer or market preferences, take your business elsewhere, and still feel the effects of encroachment.

The future would be bright if business got hold of conscious capital principles. For example, the health food market started out rough. It improved every two to three years, with better-quality food sources, increasingly diverse options and adaptation to culinary trends. Thirty years later, it poses significant competition to conventional market offerings. Conventional grocers now stock more health food due to consumer demand. Competition stemming from privacy limitations sharpens the understanding of what is and what can be. If you want more privacy in the market, you will have to create it, and the environment for it.

Privacy and security positioning shouldn’t take 30 years, given the current levels of risk involved.  You also don’t have to wait long, because social and technical innovations are already present in the marketplace.  Smaller companies are in a great position to adopt a flexible level of UI, security and ethics principles from the ground floor.  Larger companies take much longer to retrain their offerings. Loyal consumers should continue to speak up for what they want and affirm the right direction for privacy and security options.

The good news is that AI has reached a level of business and adoptive maturity that qualifies demand for ethical balances and corporate restraint. Corporate self-governance frameworks can expedite ethics as a deliverable, competitive offering to consumers now.  There are de-identification tools and ethics proposals on the table all over the modernizing world, from thoughtful social innovators who want computing futures to succeed without harming consumers.

The span of concerns over harm is proportional to AI’s ubiquitous presence in the marketplace. Big Data (machine learning), the Internet of Things (IoT), and drone robotics are examples of AI innovation in conflict with human interests.  Social innovation can help manage need in key areas flagged for ethical safeguards: bias, fair information practice, and proprietary rights with accountability.

I will examine each of these areas for social innovation in the coming days.

COMING NEXT: An Ethical Horizon for Artificial Intelligence, Bias (Part 2/4)

4 Damaging Illusions to Consumer Self-Protection Online

 

The Internet is a creative, user-endorsed environment supporting information exchanges for every purpose known to man.  So what is it about Internet use that could be so self-defeating when it comes to consumer privacy?

There are a few best-laid deceptions in the marketplace keeping the Internet hostile to personal privacy. 

“The Internet is free."

Have you ever stopped to ask yourself how the Internet can be a multi-billion-dollar business and still be free for so many to use?   The truth is that the Internet is not ‘free’. Nothing in life is free. There are costs.

How the Internet pays for free-to-you services starts with online beacons, which track, trace and evaluate your traffic and identity, usually via your home or work IP address from your Internet Service Provider.  After a while you leave a distinct ‘footprint’ online, which many marketing algorithms then compare among themselves.  This all takes place hundreds or thousands of miles away from most online consumers, at server farms and data brokerage firms.  The data firms keep tabs on any information you volunteer to the “free” service: age, sexual preferences, when you have free time, whether you’re working at work or unemployed, what kind of car you drive, and so on.

Then the firms sell it to whoever is buying.  That is how Internet companies pay each other millions to stay in business while you use a “free” account.  You are the product being sold.
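To make the beacon mechanism concrete, here is a minimal, purely illustrative Python sketch of the kind of record a tracking pixel’s server might log when a “free” page loads. The field names, example IP address, and fingerprint hashing scheme are my own assumptions for illustration, not any real ad network’s implementation.

```python
import hashlib

def beacon_record(ip: str, headers: dict) -> dict:
    """Illustrative only: the kind of record a tracking beacon
    might log when a 'free' page loads a 1x1 pixel image."""
    # Fields routinely visible to any server your browser contacts.
    profile = {
        "ip": ip,
        "user_agent": headers.get("User-Agent", ""),
        "language": headers.get("Accept-Language", ""),
        "referer": headers.get("Referer", ""),
    }
    # A stable hash of these fields acts as a crude 'footprint'
    # that lets separate visits be linked to the same browser.
    fingerprint = hashlib.sha256(
        "|".join(profile.values()).encode("utf-8")
    ).hexdigest()[:16]
    profile["fingerprint"] = fingerprint
    return profile

# Two visits from the same machine yield the same footprint, so a
# data broker can link them without any login or cookie at all.
visit1 = beacon_record("203.0.113.7", {"User-Agent": "ExampleBrowser/1.0",
                                       "Accept-Language": "en-US",
                                       "Referer": "https://news.example/article"})
visit2 = beacon_record("203.0.113.7", {"User-Agent": "ExampleBrowser/1.0",
                                       "Accept-Language": "en-US",
                                       "Referer": "https://news.example/article"})
print(visit1["fingerprint"] == visit2["fingerprint"])  # True
```

The point of the sketch is the linking step: no account or consent dialog is needed, because the same browser environment produces the same footprint on every visit.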

“My personal information is protected by US law.”

Test this unfortunate half-truth.  If the government can hack you and never suffer a consequence, or a corporation can help itself to your contacts for years with no consequences, are you being protected by the law?

There is a wide variety of laws, but no holistic federal law protecting all consumer data and personal information. Protected areas of consumer privacy are scattered across a variety of policy areas: health, employment, driver protections, data breach notification.  Protections also vary from state to state in the US, but again, there is no holistic coverage. There’s just a sense of policy running scared from your serial outrage in the capitols of our country.

Some countries and continents have a national consumer data protection policy or law.  In the US, it varies from agency to agency.  So privacy protection across the US federal government and the individual states remains as spotty as a Jackson Pollock painting.

The best fix for this problem is a fearless examination of the state and federal privacy laws covering the areas you are most concerned about.  You can do a casual search online or visit your local law library.  The more informed you are, the better decisions you will make about whom you trust with your information.

“I am owed whatever I can get from the Internet.”

Nothing sets you up for failed privacy results more than presuming that someone else’s server farm, computing code and the staff hired to market and manage your transactional information are beholden to you.  If the Internet’s architects can fool you into believing that the space you rented with the currency of your data is actually yours, you have been deceived.

This illusion is typically dispelled by being booted off or banned by an online moderator. Some avid users have attempted campaigns to claim a stake in an online company’s space. They are likely to be presented with a document, created by a very well-paid army of lawyers, explaining that the information they fed into the system actually belongs to the company because of an End User License Agreement.  That would be the biggest deception of all.

The only thing you own in the cloud is your information and your data. That never changes. If you want to change the balance of user power, you have to stop feeding the beast the data it needs to thrive.

“I erased my data.”

There’s a saying in the privacy field that ‘data never dies’. It is somewhat true. Forensics teams use the same tactics corporate data recovery pros use after, say, a storm surge knocks out computer networks. That’s great news if involuntary data loss would ruin your business or create financial havoc. However, if you want to scrub personal information from the online universe, you will need to visit a different kind of reputation specialist, like Privacy Duck.

These service specialists address a unique data brokerage and reputation stratum called People Finders, who license personally identifiable information.  People Finders sell your address, location, work, age and contact information to anyone, for a price.  An even less legitimate version of this takes place on the dark web, selling to online criminals.

THE BOTTOM LINE

If you want to better protect your personal information, adopt a consumer privacy regimen for your household. You are always the best gatekeeper of what goes in and what comes out of every information portal in your life.  Digital privacy is a new consumer discipline.  However, it is having increasingly great and powerful results in coaching the market to regard your privacy.

You can be the next person in line to demand anonymized data ecosystems like PDDP, HTTPS, increasingly secured communication, encryption, and ad blockers.  If you already use services like Ghostery, Mozilla’s private browsing, and anonymizing search engines like DuckDuckGo, you are on the path to reorganizing an online currency system.

Online businesses continue to put your privacy on the sacrificial altar when they don't have to.  Your end of the business agreement should require privacy by design, warrants for your data, and anonymization of the data they use in marketing exchanges.

Demand better protections. They are technically within reach.

When privacy apologetics are like 'vegan leather'

What is vegan leather? 'Vegan leather' is a term of pretence for a leather-like product made from vegan materials. The label presumes no animals were harmed in the making of the product. Yet in some stores, you can purchase, as 'vegan leather', a dead animal hide dyed in all-natural vegetable dyes made from plants.  The leather is not vegan, but the plant dyes are 100% vegan.

Real vegans typically won't buy 'vegan leather'. They'll buy belts and shoes made from felt, rubber, canvas and vinyl. Fake leather products are not usually labelled 'vegan leather'; they have labels or tags detailing the nylon or other synthetic materials.  However, you'll never know what kind of 'vegan leather' you might be dealing with unless you investigate further.

'Vegan leather' can be a misleading marketing term for the ignorant and/or superficial crowds who will buy things to appear more 'conscious', rather than actually being more conscious. What would motivate someone to buy a product that openly exhibits their misappropriated ethics? Whoever they are, they feel compelled to camouflage themselves among those with high ethical standards, so they can witness something they'll never commit to doing unless the standard hits critical mass. If someone is buying vegan leather, the ethical numbers have these actors and actresses on the defensive.

So how are privacy apologetics like vegan leather?

Before I say anything: I respect the efforts of all privacy proponents when they are actually being proactive about data ownership and using ethical privacy UX development practices.

However, there's a wide gulf between the professional practice of "user privacy principles" and the real-time market practice of privacy.  That's why you see all the news drama and color between the license-and-spreadsheet fire sales of PII and an employee-caused breach leading to civil liability.  The truth is somewhere between Privacy by Design and hasn't-gotten-caught-by-the-FTC.

For instance, it may feel counterintuitive to ask an institution like the NSA to adopt basic privacy principles, but it isn't.  If the NSA, or any other mass surveillance aperture, is collecting PII and diverse sensitive personal information, it is responsible for protecting that information.  Every other business and institution on the planet has to regard personal data rights or face civil liability.  They must comply with the laws that protect data owners, just like the Big Data 4: Google, Microsoft, Facebook and Palantir.

"BEWARE THE API"

The Big Data 4 are also the face of corporate, or privatized, mass surveillance (see: PRISM and the Snowden leaks). They still hunt and gather for global intelligence authorities, depending on the purchase (or legal) order from mass surveillance authorities on any given day of the week.

Do they regard privacy?  The answer, more soberly, is "When their lawyers say so." They face federal regulatory conventions that place fetters on their ability to completely disregard user privacy. The difference between them and a hacker who breaks in to steal your information is a 15-page Terms of Service agreement, which rationalizes your consent to trade the use of your datasets in exchange for an account or use of their "free" service.

It has turned out to be more of a Faustian bargain with the devil.

So when Facebook and Palantir, both data intelligence gatherers and In-Q-Tel startups who own large parcels of Palo Alto real estate, put on a privacy conference in Sweden, it does not seem like authentic privacy standardization at work. By another label, I would call it the privatized hearts-and-minds Swedish massage package, a complimentary consolation prize for the sunken US Safe Harbor conventions. Safe Harbor was a years-long triumph in privacy apologetics. It is being mourned by people who really don't care about authentic global privacy conventions.  I would call this occurrence a case study in gross privacy apologetics, rather than professional privacy pragmatism.

I did think, "Oh this is just 'vegan leather' for Euros who 'lost' something in Safe Harbor."

I can assure you Palantir's rendition of 'vegan leather' won't hold a candle to Privacy by Design. Not even close.