
[dvd-discuss] DMCA - creative surfing...



On Tue, 30 Oct 2001, Scott A Crosby wrote:

> Hello.. Over the last week, I've been doing some robot-assisted web
>...
> Thinking it over, I thought that two of my actions could be DMCA
> anticircumvention violations, and that *any* web page that links to any
>...
> creator, so that part is satisfied.
> Now, I was being creative. If I saw a URL like:
>  *   http://www.example.com/screenshots01/index.html
> that had something I wanted, I would try to see if
>  *   http://www.example.com/screenshots02/index.html
> existed and had anything interesting.
> If it did, I'd decide whether or not to grab it, and continue.
> So, my first question: Was this 'circumventing a technological measure
> that effectively controls access to a work'?
> It might be; they may not have intended me to know that '02' comes after
> '01'.
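
That sort of guessing is trivial to automate, by the way.  Here is a
rough sketch of what such a robot might do, in Python, using the URL
pattern quoted above (the range bound is made up):

    import urllib.request
    import urllib.error

    # Probe successive numbered directories until one is missing.
    for i in range(1, 100):
        url = "http://www.example.com/screenshots%02d/index.html" % i
        try:
            page = urllib.request.urlopen(url).read()
        except urllib.error.HTTPError:
            break  # e.g. 404: no such directory, stop guessing
        # ...decide whether anything in `page` is worth saving...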

Well, put it this way:
I have some pages on my web site that I purposely don't link to anywhere,
for example:

http://atari-source.com/~nsilva/ is my main personal page.
I might have
http://atari-source.com/~nsilva/secret/

and I type that URL to get the "secret" portion.

This is meant to be private, but easy enough to give out to my
friends... I certainly wouldn't consider it real security though.

There are measures built into web servers (.htaccess files, for example)
that at least actually try to restrict access.  The fact that I didn't
bother to use them shows that I am not very serious about protecting that
second page.
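
A sketch of what that would look like (Apache-style; the file paths and
realm name here are made up):

    # /home/nsilva/public_html/secret/.htaccess
    AuthType Basic
    AuthName "friends only"
    AuthUserFile /home/nsilva/.htpasswd
    Require valid-user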

If the page gets linked to, even once (say, by this email ending up on a
public archive), it may end up in google, etc. 

> Now, there was another action I did. Some websites, although they were
> free to the public and I did not guess URLs like above, tried to
> detect robots and block their access for 5 minutes (by using a magic URL
> and seeing if the person downloads it).
...
> I could have clicked each image on the website and saved it manually.
> Instead, I configured my robot to *not* access that file and avoid the
> trap.
...
> So here, one could say that they had a technological measure intended to
> effectively control certain [robotic] types of access to a work, but not
> intended to actually restrict other types of access to the work.
...
> So, is this a violation of the anticircumvention provisions: me telling my
> robot to *not* download a particular file?

It is probably a violation of their wishes, but I don't see it as a
circumvention issue.  If they really didn't want people to download
them... they shouldn't put them on a public web server, period!  I am sure
their intent was that they wanted the images to be there for people to see
when viewing their web pages, but didn't want anyone to be able to suck
them all down to resell, etc.
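
For what it's worth, the standard way for a site to ask robots to stay
out is a robots.txt file, which is only a request, not a barrier.  A
sketch, with made-up paths:

    # http://www.example.com/robots.txt
    User-agent: *
    Disallow: /images/
    Disallow: /robot-trap/

Their magic-URL trick reads like an attempt to enforce that kind of
request against robots that ignore it.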

> In a similar vein, and more hypothetically:
> 
> Many people use the secrecy of a URL as a security mechanism, in that as
> long as nobody can guess the URL, whatever private website they have is
> not downloadable.

(see above)
 
> But, as soon as any link into that 'protected' area gets made (say, some
> legitimate user of that web space makes a link to it on their home page),
> google will eventually notice that link, and find itself indexing that
> 'protected' area and offering it in search results.

(lol, how can you read my mind so well? See above...)
 
> In this case, I would hope the law wouldn't fault google. It was just
> following links like it always does.
 
That's like faulting someone who finds a "secret" door on public property.

> But, the law could fault the person who unwittingly linked in.

That's like faulting someone for telling someone else where the bank vault
is.  Hiding it in a room may keep people from taking the money, but if you
_really_ want to stop them, you should at least lock the vault.
 
> For example, say you have a discussion site that's supposed to be private.
> One of the users unwittingly posts a link to it on their home page, and
> another user dislikes the fact that google is now reporting results.
> 
> This could mean that any web page author risks liability for linking to
> *any* other web page that they do not control. It also seems related to
> the so-called 'deep linking' court cases of a few years ago.
> 
> So, what's the verdict?

Again... you can't have it both ways.  To be clear: WWW servers are
intended to provide documents for public access.  By putting a document on
your WWW server inside the document root, you are essentially granting
public access.  If you want to hide something by burying it deep in the
tree and having no links to it, fine - so long as you realize that that
isn't a legitimate form of access control.

The deep linking cases are even sillier, since those involved pages that
WERE linked to.  It seems to me that the companies simply didn't like the
nature of the web.  Fine, then don't publish on it; it's their choice.
That's like me selling CDs, but then trying to sue people for using the
"skip track" feature.

There is an easy-to-use, somewhat effective protection measure built into
HTTP and almost all web servers.  It isn't the most secure thing in the
world, as it sends the passwords in plain text, but it certainly shows
intent.  If you want any kind of protection from the law, you should
use it.
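
(The measure I mean is HTTP Basic authentication.  The exchange looks
roughly like this, with the path, realm, and credentials all made up;
note the "Basic" token is just the base64 encoding of "user:secret",
readable by anyone who captures it, which is why I say plain text:)

    GET /secret/ HTTP/1.0

    HTTP/1.0 401 Unauthorized
    WWW-Authenticate: Basic realm="friends only"

    GET /secret/ HTTP/1.0
    Authorization: Basic dXNlcjpzZWNyZXQ=

    HTTP/1.0 200 OK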


 -- noah silva